# Datasets: taln-ls2n/semeval-2010-pre

Languages: en
Multilinguality: monolingual
Size Categories: n<1K
Language Creators: unknown
Annotations Creators: unknown
Dataset Preview
Columns: id (string) · title (string) · abstract (string) · keyphrases (json) · prmu (json) · lvl-1 (string) · lvl-2 (string) · lvl-3 (string) · lvl-4 (string)
J-39
The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution
Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.
[ "sequenti auction problem", "empir analysi", "bid strategi", "multipl auction", "strateg behavior", "commodit market", "comput simul", "market effect", "ebai", "option-base extens", "proxi-bid system", "trade opportun", "electron marketplac", "busi-to-consum auction", "autom trade agent", "onlin auction", "option", "proxi bid" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "M", "U", "U", "M", "U", "M", "U", "M" ]
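The keyphrases and prmu fields are parallel arrays: each label annotates the keyphrase at the same index. A minimal sketch of consuming one row, using the first part of the row shown above; the expansion of P/R/M/U as Present/Reordered/Mixed/Unseen is an assumption carried over from other taln-ls2n dataset cards, and `bucket_by_label` is an illustrative helper, not part of the dataset tooling:

```python
# Pair each stemmed keyphrase with its PRMU label and group by label.
# P/R/M/U = Present/Reordered/Mixed/Unseen is an assumption here,
# not stated in this preview.
from collections import defaultdict

keyphrases = ["sequenti auction problem", "empir analysi", "bid strategi",
              "multipl auction", "strateg behavior", "commodit market",
              "comput simul", "market effect", "ebai",
              "option-base extens", "proxi-bid system"]
prmu = ["P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "M"]

def bucket_by_label(phrases, labels):
    """Group keyphrases by their PRMU label."""
    buckets = defaultdict(list)
    for phrase, label in zip(phrases, labels):
        buckets[label].append(phrase)
    return dict(buckets)

buckets = bucket_by_label(keyphrases, prmu)
print(len(buckets["P"]))  # 8 of these sample phrases are labelled "P"
```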
The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution ∗ Adam I. Juda Division of Engineering and Applied Sciences Harvard University, Harvard Business School ajuda@hbs.edu David C. Parkes Division of Engineering and Applied Sciences Harvard University parkes@eecs.harvard.edu

ABSTRACT
Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.

Categories and Subject Descriptors
J.4 [Computer Applications]: Social and Behavioral Sciences-Economics

General Terms
Algorithms, Design, Economics

1. INTRODUCTION
Electronic markets represent an application of information systems that has generated significant new trading opportunities while allowing for the dynamic pricing of goods. In addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]). Many authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1]. There is still little evidence of automated trading in e-markets, though.
We believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs. Without this, we do not expect individual consumers, or firms, to be confident in placing their business in the hands of an automated agent. One of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B. Among items listed on eBay, many are essentially identical. This is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5B of eBay's gross merchandise volume in 2005. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction. While Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the wrong auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem. As investigated by Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions.1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone. Both problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations. Why might the sequential auction problem be bad? Complex games may lead to bidders employing costly strategies and making mistakes.
Potential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities. Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties. (Footnote 1: The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.)

1.1 Options + Proxies: A Proposed Solution
Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price. Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. These two retail policies provide the basis for the scheme proposed in this paper.2 We extend the proxy bidding technology currently employed by eBay. Our super-proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option.
Buyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest together with the latest time period in which they are willing to wait to receive the good(s). The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions. A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics. We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005. LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem. We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. Using this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay. (Footnote 2: Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.)
Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient.

1.2 Related Work
A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, oftentimes in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20]. These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. Previous work has developed a data-driven approach toward developing a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction. Iwasaki et al. [14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized market place [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem.
However, their work uses costly options and does not remove the sequential bidding problem completely. Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.

2. EBAY AND THE DELL E193FP
The most common type of auction held on eBay is a single-item proxy auction. Auctions open at a given time and remain open for a set period of time (usually one week). Bidders bid for the item by giving a proxy a value ceiling. The proxy will bid on behalf of the bidder only as much as is necessary to maintain a winning position in the auction, up to the ceiling received from the bidder. Bidders may communicate with the proxy multiple times before an auction closes. In the event that a bidder's proxy has been outbid, a bidder may give the proxy a higher ceiling to use in the auction. eBay's proxy auction implements an incremental version of a Vickrey auction, with the item sold to the highest bidder for the second-highest bid plus a small increment.

Figure 1: Histogram of number of LCD auctions available to each bidder and number of LCD auctions in which a bidder participates (log-log axes).

The market analyzed in this paper is that of a specific model of an LCD monitor, a 19" Dell LCD, model E193FP.
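The closing rule just described (highest ceiling wins at the second-highest ceiling plus a small increment) can be sketched directly. This is an illustrative model, and the fixed 50-cent increment is an assumption rather than eBay's actual price-dependent increment schedule:

```python
def proxy_auction_close(ceilings, increment=0.50):
    """Close a single-item eBay-style proxy auction.

    Each bidder reports a value ceiling to her proxy; the item goes to
    the highest ceiling at the second-highest ceiling plus an increment,
    capped at the winner's own ceiling. The flat increment is a stand-in
    for eBay's real schedule, which varies with price level.
    """
    if not ceilings:
        return None, None
    # Sort bidder indices by ceiling, highest first (stable: earlier
    # bidder wins ties).
    order = sorted(range(len(ceilings)), key=lambda i: ceilings[i], reverse=True)
    winner = order[0]
    if len(ceilings) == 1:
        return winner, ceilings[winner]
    price = min(ceilings[order[1]] + increment, ceilings[winner])
    return winner, price

winner, price = proxy_auction_close([240.0, 230.0, 210.0])
# winner 0 pays 230.50
```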
This market was selected for a variety of reasons including:
• The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay.3
• The volume transacted is fairly high, at approximately 500 units sold per month.
• The item is not usually bundled with other items.
• The item is typically sold as new, and so suitable for the price-matching of the options-based scheme.
Raw auction information was acquired via a Perl script. The script accesses the eBay search engine,4 and returns all auctions containing the terms "Dell" and "LCD" that have closed within the past month.5 Data was stored in a text file for post-processing. To isolate the auctions in the domain of interest, queries were made against the titles of eBay auctions that closed between 27 May, 2005 through 1 October, 2005.6 Figure 1 provides a general sense of how many LCD auctions occur while a bidder is interested in pursuing a monitor.7
(Footnote 3: For reference, Dell's October 2005 mail order catalogue quotes the price of the monitor as being $379 without a desktop purchase, and $240 as part of a desktop purchase upgrade.)
(Footnote 4: http://search.ebay.com)
(Footnote 5: The search is not case-sensitive.)
(Footnote 6: Specifically, the query found all auctions where the title contained all of the following strings: "Dell," "LCD" and "E193FP," while excluding all auctions that contained any of the following strings: "Dimension," "GHZ," "desktop," "p4" and "GB." The exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest. For example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded.)
8,746 bidders (86%) had more than one auction available between when they first placed a bid on eBay and the
latest closing time of an auction in which they bid (with an average of 78 auctions available). Figure 1 also illustrates the number of auctions in which each bidder participates. Only 32.3% of bidders who had more than one auction available are observed to bid in more than one auction (bidding in 3.6 auctions on average). A simple regression analysis shows that bidders tend to submit maximal bids to an auction that are $1.22 higher after spending twice as much time in the system, as well as bids that are $0.27 higher in each subsequent auction. Among the 508 bidders that won exactly one monitor and participated in multiple auctions, 201 (40%) paid more than $10 more than the closing price of another auction in which they bid, paying on average $35 more (standard deviation $21) than the closing price of the cheapest auction in which they bid but did not win. Furthermore, among the 2,216 bidders that never won an item despite participating in multiple auctions, 421 (19%) placed a losing bid in one auction that was more than $10 higher than the closing price of another auction in which they bid, submitting a losing bid on average $34 more (standard deviation $23) than the closing price of the cheapest auction in which they bid but did not win. Although these measures do not say a bidder that lost could have definitively won (because we only consider the final winning price and not the bid of the winner to her proxy), or a bidder that won could have secured a better price, this is at least indicative of some bidder mistakes. (Footnote 7: As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods. Bidders have an average observed patience of 3.9 days, with a standard deviation of 11.4 days.)

3.
MODELING THE SEQUENTIAL AUCTION PROBLEM
While the eBay analysis was for simple bidders who desire only a single item, let us now consider a more general scenario where people may desire multiple goods of different types, possessing general valuations over those goods. Consider a world with buyers (sometimes called bidders) B and K different types of goods G1, ..., GK. Let T = {0, 1, ...} denote time periods. Let L denote a bundle of goods, represented as a vector of size K, where Lk ∈ {0, 1} denotes the quantity of good type Gk in the bundle.8 The type of a buyer i ∈ B is (ai, di, vi), with arrival time ai ∈ T, departure time di ∈ T, and private valuation vi(L) ≥ 0 for each bundle of goods L received between ai and di, and zero value otherwise. The arrival time models the period in which a buyer first realizes her demand and enters the market, while the departure time models the period in which a buyer loses interest in acquiring the good(s). In settings with general valuations, we need an additional assumption: an upper bound on the difference between a buyer's arrival and departure, denoted ΔMax. Buyers have quasi-linear utilities, so that the utility of buyer i receiving bundle L and paying p, in some period no later than di, is ui(L, p) = vi(L) − p. Each seller j ∈ S brings a single item kj to the market, has no intrinsic value and wants to maximize revenue. Seller j has an arrival time, aj, which models the period in which she is first interested in listing the item, while the departure time, dj, models the latest period in which she is willing to consider having an auction for the item close. A seller will receive payment by the end of the reported departure of the winning buyer. (Footnote 8: We extend notation whereby a single item k of type Gk refers to a vector L with Lk = 1.) We say an individual auction in a sequence is locally strategyproof (LSP) if truthful bidding is a dominant strategy for a buyer that can only bid in that auction.
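The buyer model just defined, with type (ai, di, vi) and quasi-linear utility that is zero outside the arrival-departure window, can be written down directly. A minimal sketch, with bundles encoded as 0/1 tuples over the K good types; the concrete numbers are illustrative, in the spirit of the paper's Sand-and-Stone examples:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Bundle = Tuple[int, ...]  # L_k in {0, 1} for each of the K good types

@dataclass
class Buyer:
    arrival: int                      # a_i: period demand is first realized
    departure: int                    # d_i: last period of interest
    value: Callable[[Bundle], float]  # v_i(L) >= 0

    def utility(self, bundle: Bundle, price: float, period: int) -> float:
        """Quasi-linear utility u_i(L, p) = v_i(L) - p; the bundle has
        zero value if received outside [a_i, d_i]."""
        if self.arrival <= period <= self.departure:
            return self.value(bundle) - price
        return -price

# A buyer who values only the pair of goods (e.g. Sand and Stone) at 2,000.
alice = Buyer(arrival=0, departure=5,
              value=lambda L: 2000.0 if L == (1, 1) else 0.0)
alice.utility((1, 1), 1500.0, period=3)   # 500.0
```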
Consider the following example to see that LSP is insufficient for the existence of a dominant bidding strategy for buyers facing a sequence of auctions.
Example 1. Alice values one ton of Sand with one ton of Stone at $2,000. Bob holds a Vickrey auction for one ton of Sand on Monday and a Vickrey auction for one ton of Stone on Tuesday. Alice has no dominant bidding strategy because she needs to know the price for Stone on Tuesday to know her maximum willingness to pay for Sand on Monday.
Definition 1. The sequential auction problem. Given a sequence of auctions, despite each auction being locally strategyproof, a bidder has no dominant bidding strategy.
Consider a sequence of auctions. Generally, auctions selling the same item will be uncertainly-ordered, because a buyer will not know the ordering of closing prices among the auctions. Define the interesting bundles for a buyer as all bundles that could maximize the buyer's profit for some combination of auctions and bids of other buyers.9 Within the interesting bundles, say that an item has uncertain marginal value if the marginal value of an item depends on the other goods held by the buyer.10 Say that an item is oversupplied if there is more than one auction offering an item of that type. Say two bundles are substitutes if one of those bundles has the same value as the union of both bundles.11
Proposition 1. Given locally strategyproof single-item auctions, the sequential auction problem exists for a bidder if and only if either of the following two conditions is true: (1) within the set of interesting bundles (a) there are two bundles that are substitutes, (b) there is an item with uncertain marginal value, or (c) there is an item that is oversupplied; (2) a bidder faces competitors' bids that are conditioned on the bidder's past bids.
Proof. (Sketch.)
(⇐) A bidder does not have a dominant strategy when (a) she does not know which bundle among substitutes to pursue, (b) she faces the exposure problem, or (c) she faces the multiple copies problem. Additionally, a bidder does not have a dominant strategy when she does not know how to optimally influence the bids of competitors. (⇒) By contradiction. A bidder has a dominant strategy to bid its constant marginal value for a given item in each auction available when conditions (1) and (2) are both false. For example, the following buyers all face the sequential auction problem as a result of condition (a), (b) and (c) respectively: a buyer who values one ton of Sand for $1,000, or one ton of Stone for $2,000, but not both Sand and Stone; a buyer who values one ton of Sand for $1,000, one ton of Stone for $300, and one ton of Sand and one ton of Stone for $1,500, and can participate in an auction for Sand before an auction for Stone; a buyer who values one ton of Sand for $1,000 and can participate in many auctions selling Sand.
(Footnote 9: Assume that the empty set is an interesting bundle.)
(Footnote 10: Formally, an item k has uncertain marginal value if |{m : m = vi(Q) − vi(Q − k), Q ⊆ L for some L ∈ InterestingBundles, Q ⊇ k}| > 1.)
(Footnote 11: Formally, two bundles A and B are substitutes if vi(A ∪ B) = max(vi(A), vi(B)), where A ∪ B = L with Lk = max(Ak, Bk).)

4. SUPER PROXIES AND OPTIONS
The novel solution proposed in this work to resolve the sequential auction problem consists of two primary components: richer proxy agents, and options with price matching. In finance, a real option is a right to acquire a real good at a certain price, called the exercise price. For instance, Alice may obtain from Bob the right to buy Sand from him at an exercise price of $1,000. An option provides the right to purchase a good at an exercise price but not the obligation. This flexibility allows buyers to put together a collection of options on goods and then decide which to exercise.
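At exercise time, the proxy described in this scheme picks the subset of held options that maximizes reported value minus the summed exercise prices, and returns the rest. A brute-force sketch of that choice; the function name and the Sand/Stone numbers are illustrative, and it assumes at most one option per good, as the proxies in the text enforce:

```python
from itertools import combinations

def exercise_options(options, value):
    """Choose which options to exercise at departure.

    options: list of (good, exercise_price) pairs held by the proxy.
    value: reported valuation over sets of goods.
    Returns the utility-maximizing subset (empty if no subset has
    positive utility). Brute force over subsets, so only suitable for
    small option sets; assumes at most one option per good.
    """
    best_subset, best_utility = (), 0.0
    for r in range(1, len(options) + 1):
        for subset in combinations(options, r):
            goods = frozenset(g for g, _ in subset)
            utility = value(goods) - sum(p for _, p in subset)
            if utility > best_utility:
                best_subset, best_utility = subset, utility
    return list(best_subset), best_utility

# A buyer holds costless options on Sand (exercise 1000) and Stone (700)
# and values only the pair at 2000: both are exercised, utility 300.
val = lambda goods: 2000.0 if goods == {"sand", "stone"} else 0.0
chosen, utility = exercise_options([("sand", 1000.0), ("stone", 700.0)], val)
```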
Options are typically sold at a price called the option price. However, options obtained at a non-zero option price cannot generally support a simple, dominant bidding strategy, as a buyer must compute the expected value of an option to justify the cost [8]. This computation requires a model of the future, which in our setting requires a model of the bidding strategies and the values of other bidders. This is the very kind of game-theoretic reasoning that we want to avoid. Instead, we consider costless options with an option price of zero. This will require some care as buyers are weakly better off with a costless option than without one, whatever its exercise price. However, multiple bidders pursuing options with no intention of exercising them would cause the efficiency of an auction for options to unravel. This is the role of the mandatory proxy agents, which intermediate between buyers and the market. A proxy agent forces a link between the valuation function used to acquire options and the valuation used to exercise options. If a buyer tells her proxy an inflated value for an item, she runs the risk of having the proxy exercise options at a price greater than her value.

4.1 Buyer Proxies
4.1.1 Acquiring Options
After her arrival, a buyer submits her valuation v̂i (perhaps untruthfully) to her proxy in some period âi ≥ ai, along with a claim about her departure time d̂i ≥ âi. All transactions are intermediated via proxy agents. Each auction is modified to sell an option on that good to the highest bidding proxy, with an initial exercise price set to the second-highest bid received.12 When an option in which a buyer is interested becomes available for the first time, the proxy determines its bid by computing the buyer's maximum marginal value for the item, and then submits a bid in this amount. A proxy does not bid for an item when it already holds an option.
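The proxy's bid is the buyer's maximum marginal value for the item, taken over all bundles she might already hold. A brute-force sketch over an XOR-style valuation (a map from bundles to values); the subset enumeration is exponential in the number of goods and is for illustration only, whereas the paper's own algorithm iterates over valuation terms in O(KM²):

```python
def max_marginal_value(item, xor_terms):
    """Maximum marginal value of `item`: max over bundles L of
    v(L + item) - v(L), where v is induced by an XOR-valuation
    (v(S) = largest value of any term contained in S; 0 if none)."""
    def v(bundle):
        return max((val for term, val in xor_terms.items() if term <= bundle),
                   default=0.0)
    goods = set().union(*xor_terms) if xor_terms else set()
    # Enumerate every bundle that excludes `item` (brute force).
    bundles = [frozenset()]
    for g in goods - {item}:
        bundles += [b | {g} for b in bundles]
    return max(v(b | {item}) - v(b) for b in bundles)

# Exposure example from Section 1: console 200, game 30, both 250.
terms = {frozenset({"console"}): 200.0, frozenset({"game"}): 30.0,
         frozenset({"console", "game"}): 250.0}
max_marginal_value("console", terms)   # 220.0 (game already held: 250 - 30)
```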
The bid price is:

bid_i^t(k) = max_L [v̂i(L + k) − v̂i(L)]   (1)

By having a proxy compute a buyer's maximum marginal value for an item and then bidding only that amount, a buyer's proxy will win any auction that could possibly be of benefit to the buyer and only lose those auctions that could never be of value to the buyer. (Footnote 12: The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item. Without a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept.)

Table 1: Three-buyer example with each wanting a single item and one auction occurring on Monday and Tuesday. X_Y denotes an option with exercise price X and bookkeeping that a proxy has prevented Y from currently possessing an option; → is the updating of exercise price and bookkeeping.

| Buyer (type) | Monday | Tuesday |
| Molly (Mon, Tues, $8) | 6_Nancy | 6_Nancy → 4_Polly |
| Nancy (Mon, Tues, $6) | - | 4_Polly |
| Polly (Mon, Tues, $4) | - | - |

When a proxy wins an auction for an option, the proxy will store in its local memory the identity (which may be a pseudonym) of the proxy not holding an option because of the proxy's win (i.e., the proxy that it "bumped" from winning, if any). This information will be used for price matching.

4.1.2 Pricing Options
Sellers agree by joining the market to allow the proxy representing a buyer to adjust the exercise price of an option that it holds downwards if the proxy discovers that it could have achieved a better price by waiting to bid in a later auction for an option on the same good. To assist in the implementation of the price matching scheme each proxy tracks future auctions for an option that it has already won and will determine who would be bidding in that auction had the proxy delayed its entry into the market until this later auction.
The proxy will request price matching from the seller that granted it an option if the proxy discovers that it could have secured a lower price by waiting. To reiterate, the proxy does not acquire more than one option for any good. Rather, it reduces the exercise price on its already issued option if a better deal is found. The proxy is able to discover these deals by asking each future auction to report the identities of the bidders in that auction together with their bids. This needs to be enforced by eBay, as the central authority. The highest bidder in this later auction, across those whose identity is not stored in the proxy's memory for the given item, is exactly the bidder against whom the proxy would be competing had it delayed its entry until this auction. If this high bid is lower than the current option price held, the proxy price matches down to this high bid price. After price matching, one of two adjustments will be made by the proxy for bookkeeping purposes. If the winner of the auction is the bidder whose identity has been in the proxy's local memory, the proxy will replace that local information with the identity of the bidder whose bid it just price matched, as that is now the bidder the proxy has prevented from obtaining an option. If the auction winner's identity is not stored in the proxy's local memory the memory may be cleared. In this case, the proxy will simply price match against the bids of future auction winners on this item until the proxy departs.
Example 2 (Table 1). Molly's proxy wins the Monday auction, submitting a bid of $8 and receiving an option for $6. Molly's proxy adds Nancy to its local memory as Nancy's proxy would have won had Molly's proxy not bid.
On Tuesday, only Nancy's and Polly's proxies bid (as Molly's proxy holds an option), with Nancy's proxy winning an option for $4 and noting that it bumped Polly's proxy. At this time, Molly's proxy will price match its option down to $4 and replace Nancy with Polly in its local memory as per the price match algorithm, as Polly would be holding an option had Molly never bid.

4.1.3 Exercising Options
At the reported departure time the proxy chooses which options to exercise. Therefore, a seller of an option must wait until period d̂w for the option to be exercised and receive payment, where w was the winner of the option.13 For bidder i, in period d̂i, the proxy chooses the option(s) that maximize the (reported) utility of the buyer:

θ*_t = argmax_{θ ⊆ Θ} (v̂i(γ(θ)) − π(θ))   (2)

where Θ is the set of all options held, γ(θ) are the goods corresponding to a set of options, and π(θ) is the sum of exercise prices for a set of options. All other options are returned.14 No options are exercised when no combination of options has positive utility.

4.1.4 Why bookkeep and not match winning price?
One may believe that an alternative method for implementing a price matching scheme could be to simply have proxies match the lowest winning price they observe after winning an option. However, as demonstrated in Table 2, such a simple price matching scheme will not lead to a truthful system.

Table 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not.

| Buyer (type) | Monday | Tuesday |
| Truth: | | |
| Molly (Mon, Mon, $8) | 6_Nancy | |
| Nancy (Mon, Tues, $6) | - | 4_Polly |
| Polly (Mon, Tues, $4) | - | - |
| Misreport: | | |
| Molly (Mon, Mon, $8) | - | |
| Nancy (Mon, Tues, $10) | 8_Molly | 8_Molly → 4_φ |
| Polly (Mon, Tues, $4) | - | 0_φ |
| Misreport & match low: | | |
| Molly (Mon, Mon, $8) | - | |
| Nancy (Mon, Tues, $10) | 8 | 8 → 0 |
| Polly (Mon, Tues, $4) | - | 0 |

The first scenario in Table 2 demonstrates the outcome if all agents were to truthfully report their types.
Molly would win the Monday auction and receive an option with an exercise price of $6 (subsequently exercising that option at the end of Monday), and Nancy would win the Tuesday auction and receive an option with an exercise price of $4 (subsequently exercising that option at the end of Tuesday). The second scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, using the proposed bookkeeping method. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the highest bid submitted on Tuesday among those proxies not stored in local memory is Polly's bid of $4, and so Nancy's proxy would price match the exercise price of its option down to $4. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is the same as when she truthfully revealed her type to her proxy.
(Footnote 13: While this appears restrictive on the seller, we believe it not significantly different than what sellers on eBay currently endure in practice. An auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment).)
(Footnote 14: Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses. However, the system will not allow a seller to re-auction an option until ΔMax after the option had first been issued in order to maintain a truthful mechanism.)
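The price-matching bookkeeping can be traced in code. A minimal sketch of a single update, reproducing Example 2's Tuesday step; `price_match` is an illustrative helper for one good, and it implements only the replace-the-remembered-bidder path, not the memory-clearing case also described in the text:

```python
def price_match(exercise_price, remembered, later_bids):
    """One price-matching step after a later auction for the same good.

    later_bids: {bidder: bid} in an auction the proxy sat out. The proxy
    competes hypothetically against the highest bid among bidders NOT in
    its bookkeeping memory; if that bid is below the current exercise
    price, it matches down and remembers that bidder instead.
    Simplified: only the replace path of the bookkeeping rule is modeled.
    """
    rivals = {b: v for b, v in later_bids.items() if b != remembered}
    if not rivals:
        return exercise_price, remembered
    top = max(rivals, key=rivals.get)
    if rivals[top] < exercise_price:
        return rivals[top], top
    return exercise_price, remembered

# Example 2: Molly holds an option at exercise price 6, remembering that
# she bumped Nancy. On Tuesday Nancy bids 6 and Polly bids 4; Molly's
# proxy matches down to Polly's 4 and now remembers Polly.
price, remembered = price_match(6.0, "Nancy", {"Nancy": 6.0, "Polly": 4.0})
# price → 4.0, remembered → "Polly"
```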
The third scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, if the price matching scheme were for proxies to simply match their option price to the lowest winning price at any time while they are in the system. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the lowest price on Tuesday was $0, and so Nancy's proxy would price match the exercise price of its option down to $0. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is lower than when she truthfully revealed her type to the proxy. Therefore, a price matching policy of simply matching the lowest price paid may not elicit truthful information from buyers.

4.2 Complexity of Algorithm
An XOR-valuation of size M for buyer i is a set of M terms, ⟨L^1, v_i^1⟩, ..., ⟨L^M, v_i^M⟩, that maps distinct bundles to values, where i is interested in acquiring at most one such bundle. For any bundle S, vi(S) = max_{L^m ⊆ S} (v_i^m).
Theorem 1. Given an XOR-valuation which possesses M terms, there is an O(KM²) algorithm for computing all maximum marginal values, where K is the number of different item types in which a buyer may be interested.
Proof. For each item type, recall Equation 1 which defines the maximum marginal value of an item. For each bundle L in the M-term valuation, vi(L + k) may be found by iterating over the M terms. Therefore, the number of terms explored to determine the maximum marginal value for any item is O(M²), and so the total number of bundle comparisons to be performed to calculate all maximum marginal values is O(KM²).
Theorem 2. The total memory required by a proxy for implementing price matching is O(K), where K is the number of distinct item types. The total work performed by a proxy to conduct price matching in each auction is O(1).
Proof. By construction of the algorithm, the proxy stores one maximum marginal value for each item type for bidding, of which there are O(K); at most one buyer's identity for each item type, of which there are O(K); and one current option exercise price for each item type, of which there are O(K). For each auction, the proxy either submits a precomputed bid or price matches, both of which take O(1) work.

4.3 Truthful Bidding to the Proxy Agent

Proxies transform the market into a direct revelation mechanism, where each buyer i interacts with the proxy only once,^15 and does so by declaring a bid, b_i, which is defined as an announcement of her type, (â_i, d̂_i, v̂_i), where the announcement may or may not be truthful. We denote all received bids other than i's as b_−i. Given bids, b = (b_i, b_−i), the market determines allocations, x_i(b), and payments, p_i(b) ≥ 0, to each buyer (using an online algorithm). A dominant strategy equilibrium for buyers requires that

v_i(x_i(b_i, b_−i)) − p_i(b_i, b_−i) ≥ v_i(x_i(b̂_i, b_−i)) − p_i(b̂_i, b_−i), ∀b̂_i ≠ b_i, ∀b_−i,

where b_i denotes the truthful bid. We now establish that it is a dominant strategy for a buyer to reveal her true valuation and true departure time to her proxy agent immediately upon arrival to the system. The proof builds on the price-based characterization of strategyproof single-item online auctions in Hajiaghayi et al. [12]. Define a monotonic and value-independent price function p^s_i(a_i, d_i, L, v_−i), which can depend on the values of other agents v_−i. Price p^s_i(a_i, d_i, L, v_−i) will represent the price available to agent i for bundle L in the mechanism if it announces arrival time a_i and departure time d_i. The price is independent of the value v_i of agent i, but can depend on a_i, d_i and L as long as it satisfies a monotonicity condition.

Definition 2. Price function p^s_i(a_i, d_i, L, v_−i) is monotonic if p^s_i(a′_i, d′_i, L′, v_−i) ≤ p^s_i(a_i, d_i, L, v_−i) for all a′_i ≤ a_i, all d′_i ≥ d_i, all bundles L′ ⊆ L, and all v_−i.

Lemma 1.
An online combinatorial auction will be strategyproof (with truthful reports of arrival, departure and value a dominant strategy) when there exists a monotonic and value-independent price function, p^s_i(a_i, d_i, L, v_−i), such that for all i, all a_i, d_i ∈ T, and all v_i, agent i is allocated bundle L* = argmax_L [v_i(L) − p^s_i(a_i, d_i, L, v_−i)] in period d_i and makes payment p^s_i(a_i, d_i, L*, v_−i).

Proof. Agent i cannot benefit from reporting a later departure d̂_i > d_i because the allocation is made in period d̂_i and the agent would have no value for this allocation. Agent i cannot benefit from reporting a later arrival â_i ≥ a_i or earlier departure d̂_i ≤ d_i because of price monotonicity. Finally, the agent cannot benefit from reporting some v̂_i ≠ v_i because its reported valuation does not change the prices it faces, and the mechanism maximizes its utility given its reported valuation and given the prices.

15 For analysis purposes, we view the mechanism as an opaque market so that the buyer cannot condition her bid on bids placed by others.

Lemma 2. At any given time, there is at most one buyer in the system whose proxy does not hold an option for a given item type because of buyer i's presence in the system, and the identity of that buyer will be stored in i's proxy's local memory at that time if such a buyer exists.

Proof. By induction. Consider the first proxy that a buyer prevents from winning an option. Either (a) the bumped proxy will leave the system having never won an option, or (b) the bumped proxy will win an auction in the future. If (a), the buyer's presence prevented exactly that one buyer from winning an option, but will not have prevented any other proxies from winning an option (as the buyer's proxy will not bid on additional options upon securing one), and will have had that bumped proxy's identity in its local memory by definition of the algorithm.
If (b), the buyer has not prevented the bumped proxy from winning an option after all, but rather has prevented only the proxy that lost to the bumped proxy from winning (if any), whose identity will now be stored in the proxy's local memory by definition of the algorithm. For this new identity in the buyer's proxy's local memory, either scenario (a) or (b) will be true, ad infinitum.

Given this, we show that the options-based infrastructure implements a price-based auction with a monotonic and value-independent price schedule to every agent.

Theorem 3. Truthful revelation of valuation, arrival and departure is a dominant strategy for a buyer in the options-based market.

Proof. First, define a simple agent-independent price function p^k_i(t, v_−i) as the highest bid by the proxies not holding an option on an item of type G_k at time t, not including the proxy representing i herself and not including any proxies that would have already won an option had i never entered the system (i.e., those whose identity is stored in i's proxy's local memory), or ∞ if there is no supply at t. This set of proxies is independent of any declaration i makes to its proxy (as the set explicitly excludes the at most one proxy (see Lemma 2) that i has prevented from holding an option), and each bid submitted by a proxy within this set is only a function of its own buyer's declared valuation (see Equation 1). Furthermore, i cannot influence the supply she faces, as any options returned by bidders due to a price set by i's proxy's bid will be re-auctioned after i has departed the system. Therefore, p^k_i(t, v_−i) is independent of i's declaration to its proxy. Next, define p^{s,k}_i(â_i, d̂_i, v_−i) = min_{â_i ≤ τ ≤ d̂_i} [p^k_i(τ, v_−i)] (possibly ∞) as the minimum of p^k_i(t, v_−i) over the reported arrival–departure window, which is clearly monotonic. By construction of price matching, this is exactly the price obtained by a proxy on any option that it holds at departure.
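The minimization step just described can be sketched as follows (the data layout and function name are illustrative assumptions, not from the paper). Note that widening the reported window [â_i, d̂_i] can only lower the returned price, which is exactly the monotonicity that Lemma 1 requires.

```python
def interval_min_price(price_path, arrive, depart):
    """price_path: list of (time, price) samples of p_k(t, v_-i), one per
    auction of this item type. Returns the minimum price over the window
    [arrive, depart], or infinity when there is no supply in the window."""
    window = [p for t, p in price_path if arrive <= t <= depart]
    return min(window, default=float('inf'))
```

An earlier reported arrival or later reported departure can only add samples to `window`, so the minimum weakly decreases, mirroring Definition 2.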
Define p^s_i(â_i, d̂_i, L, v_−i) = Σ_{k=1}^{K} p^{s,k}_i(â_i, d̂_i, v_−i) · L_k, which is monotonic in â_i, d̂_i and L since p^{s,k}_i(â_i, d̂_i, v_−i) is monotonic in â_i and d̂_i and (weakly) greater than zero for each k. Given the set of options held at d̂_i, which may be a subset of those items with non-infinite prices, the proxy exercises options to maximize the reported utility. Left to show is that all bundles that could not be obtained with options held are priced sufficiently high as to not be preferred. For each such bundle, either there is an item priced at ∞ (in which case the bundle would not be desired) or there must be an item k in that bundle for which the proxy does not hold an option that was available. In all auctions for such an item there must have been a distinct bidder with a bid greater than bid^t_i(k), which subsequently results in p^{s,k}_i(â_i, d̂_i, v_−i) > bid^t_i(k), and so the bundle without k would be preferred.

Theorem 4. The super-proxy, options-based scheme is individually rational for both buyers and sellers.

Proof. By construction, the proxy exercises the profit-maximizing set of options obtained, or no options if no set of options derives non-negative surplus. Therefore, buyers are guaranteed non-negative surplus by participating in the scheme. For sellers, the price of each option is based on a non-negative bid or zero.

           Price     σ(Price)   Value   Surplus
  eBay     $240.24   $32        $244    $4
  Options  $239.66   $12        $263    $23

Table 3: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market using worst-case estimates of bidders' true values.

5. EVALUATING THE OPTIONS / PROXY INFRASTRUCTURE

A goal of the empirical benchmarking, and a reason to collect data from eBay, is to build a realistic model of buyers from which to estimate seller revenue and other market effects under the options-based scheme.
We simulate a sequence of auctions that match the timing of the Dell LCD auctions on eBay.^16 When an auction successfully closes on eBay, we simulate a Vickrey auction for an option on the item sold in that period. Auctions that do not successfully close on eBay are not simulated. We estimate the arrival, departure and value of each bidder on eBay from their observed behavior.^17 Arrival is estimated as the first time that a bidder interacts with the eBay proxy, while departure is estimated as the latest closing time among eBay auctions in which a bidder participates. We initially adopt a particularly conservative estimate for bidder value, estimating it as the highest bid a bidder was observed to make on eBay. Table 3 compares the distribution of closing prices on eBay and in the simulated options scheme. While the average revenue in both schemes is virtually the same ($239.66 in the options scheme vs. $240.24 on eBay), the winners in the options scheme tend to value the item won 7% more than the winners on eBay ($263 in the options scheme vs. $244 on eBay).

16 When running the simulations, the results of the first and final ten days of auctions are not recorded, to reduce edge effects that come from viewing a discrete time window of a continuous process.
17 For the 100 bidders that won multiple times on eBay, we have each one bid a constant marginal value for each additional item in each auction until the number of options held equals the total number of LCDs won on eBay, with each option available for price matching independently. This bidding strategy is not a dominant strategy (it falls outside the type space possible for buyers on which the proof of truthfulness has been built), but is believed to be the most appropriate first-order action for simulation.

5.1 Bid Identification

We extend the work of Haile and Tamer [11] to sequential auctions to get a better view of underlying bidder values. Rather than assuming an equilibrium behavior for bidders, as in standard econometric techniques, Haile and Tamer do not attempt to model how bidders' true values get mapped into a bid in any given auction. Rather, in the context of repeated single-item auctions with distinct bidder populations, Haile and Tamer make only the following two assumptions when estimating the distribution of true bidder values:

1. Bidders do not bid more than they are willing to pay.
2. Bidders do not allow an opponent to win at a price they are willing to beat.

From the first of their two assumptions, given the bids placed by each bidder in each auction, Haile and Tamer derive a method for estimating an upper bound of the bidding population's true value distribution (i.e., the bound that lies above the true value distribution). From the second of their two assumptions, given the winning price of each auction, Haile and Tamer derive a method for estimating a lower bound of the bidding population's true value distribution. It is only the upper bound of the distribution that we utilize in our work. Haile and Tamer assume that bidders only participate in a single auction, and require independence of the bidding population from auction to auction. Neither assumption is valid here: the former because bidders are known to bid in more than one auction, and the latter because the set of bidders in an auction is in all likelihood not a true i.i.d. sampling of the overall bidding population. In particular, those who win auctions are less likely to bid in successive auctions, while those who lose auctions are more likely to remain bidders in future auctions.

[Figure 2: CDF of maximum bids observed and upper-bound estimate of the bidding population's distribution of maximum willingness to pay (axes: Value ($) vs. CDF; curves: Observed Max Bids, Upper Bound of True Value). The true population distribution lies below the estimated upper bound.]
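A simplified sketch of the preprocessing this suggests — not the full Haile and Tamer order-statistic estimator — is given below (function names and data layout are our own): each bidder is summarized by her maximum bid across all auctions for the item, and one auction observation is kept per bidder to mitigate the dependence between auctions.

```python
import random

def dedup_max_bids(bid_log, seed=0):
    """bid_log: list of (bidder_id, auction_id, bid).
    Returns one (auction_id, max_bid_across_all_auctions) pair per
    bidder, with the representative auction chosen at random."""
    rng = random.Random(seed)
    by_bidder = {}
    for bidder, auction, bid in bid_log:
        auctions, best = by_bidder.setdefault(bidder, (set(), 0))
        auctions.add(auction)
        by_bidder[bidder] = (auctions, max(best, bid))
    return [(rng.choice(sorted(auctions)), best)
            for auctions, best in by_bidder.values()]

def empirical_cdf(values, x):
    """Fraction of observations at or below x."""
    return sum(v <= x for v in values) / len(values)
```

Under the first assumption, each retained maximum bid is a lower bound on that bidder's willingness to pay, so the empirical CDF of these maxima sits above the true value distribution's CDF.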
In applying their methods we make the following adjustments:

• Within a given auction, each individual bidder's true willingness to pay is assumed weakly greater than the maximum bid that bidder submits across all auctions for that item (either past or future).
• When estimating the upper bound of the value distribution, if a bidder bids in more than one auction, randomly select one of the auctions in which the bidder bid, and only utilize that one observation during the estimation.^18

Figure 2 provides the distribution of maximum bids placed by bidders on eBay as well as the estimated upper bound of the true value distribution of bidders based on the extended Haile and Tamer method.^19 As can be seen, the smallest meaningful relative gap between the two curves occurs near the 80th percentile, where the upper bound is 1.17 times the maximum bid. We therefore adopt a uniform scaling factor of 1.15 as a less conservative model of bidder values, and now present results from this less conservative analysis.

Table 4 shows the distribution of closing prices in auctions on eBay and in the simulated options scheme. The mean price in the options scheme is now significantly higher, 15% greater than the prices on eBay ($276 in the options scheme vs. $240 on eBay), while the standard deviation of closing prices is lower among the options-scheme auctions ($14 in the options scheme vs. $32 on eBay). Therefore, not only is the expected revenue stream higher, but the lower variance provides sellers a greater likelihood of realizing that higher revenue.

           Price     σ(Price)   Value   Surplus
  eBay     $240.24   $32        $281    $40
  Options  $275.80   $14        $302    $26

Table 4: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market, using an adjusted Haile and Tamer estimate of bidders' true values being 15% higher than their maximum observed bid.

The efficiency of the options scheme remains higher than on eBay. The winners in the options scheme now have an average estimated value 7.5% higher, at $302. In an effort to better understand this efficiency, we formulated a mixed integer program (MIP) to determine a simple estimate of the allocative efficiency of eBay. The MIP computes the efficient value of the offline problem with full hindsight on all bids and all supply.^20 Using a scaling of 1.15, the total value allocated to eBay winners is estimated at $551,242, while the optimal value (from the MIP) is $593,301. This suggests an allocative efficiency of 92.9%: while the typical value of a winner on eBay is $281, an average value of $303 was possible.^21 Note the options-based scheme comes very close to achieving this level of efficiency (99.7% efficient in this estimate) even though it operates without the benefit of hindsight. Finally, although the typical winning bidder surplus decreases between eBay and the options-based scheme, some surplus redistribution would be possible because the total market efficiency is improved.^22

18 In current work, we assume that removing duplicate bidders is sufficient to make the buying populations independent i.i.d. draws from auction to auction. If one believes that certain portions of the population are drawn to certain auctions, then further adjustments would be required in order to utilize these techniques.
19 The estimation of the points in the curve is a minimization over many variables, many of which can have small-numbers bias. Consequently, Haile and Tamer suggest approximating the minimum over all terms y_i with the weighted average Σ_i y_i exp(y_i ρ) / Σ_j exp(y_j ρ), which reduces the small-numbers effects. We used ρ = −1000 and removed observations of auctions with 17 bidders or more, as they occurred very infrequently. However, some small-numbers bias still demonstrated itself in the plateau in our upper-bound estimate around a value of $300.
20 Buyers who won more than one item on eBay are cloned so that they appear to be multiple bidders of identical type.
21 As long as one believes that every bidder's true value is a constant factor α away from their observed maximum bid, the 92.9% efficiency calculation holds for any value of α. In practice, this belief may not be reasonable. For example, if losing bidders tend to have true values close to their observed maximum bids while eBay winners have true values much greater than their observed maximum bids, then downward bias is introduced in the efficiency calculation at present.
22 The increase in eBay winner surplus between Tables 3 and 4 is to be expected, as the α scaling strictly increases the estimated value of the eBay winners while holding the prices at which they won constant.

6. DISCUSSION

The biggest concern with our scheme is that proxy agents who may be interested in many different items may acquire many more options than they finally exercise. This can lead to efficiency loss. Notice that this is not an issue when bidders are only interested in a single item (as in our empirical study), or have linear-additive values on items. To address this, we would prefer to have proxy agents use more caution in acquiring options and use a more adaptive bidding strategy than that in Equation 1. For instance, if a proxy is already holding an option with an exercise price of $3 on some item for which it has value of $10, and it values some substitute item at $5, the proxy could reason that in no circumstance will it be useful to acquire an option on the second item. We formulate a more sophisticated bidding strategy along these lines. Let Θ_t be the set of all options a proxy for bidder i already possesses at time t. Let θ_t ⊆ Θ_t be a subset of those options, the sum of whose exercise prices is π(θ_t), with the goods corresponding to those options being γ(θ_t). Let Π(θ_t) = v̂_i(γ(θ_t)) − π(θ_t) be the (reported) available surplus associated with a set of options. Let θ*_t be the set of options currently held that would maximize the buyer's surplus; i.e., θ*_t = argmax_{θ_t ⊆ Θ_t} Π(θ_t). Let the maximal willingness to pay for an item k represent a price above which the agent knows it would never exercise an option on the item given the current options held. This can be computed as follows:

bid^t_i(k) = max_L [0, min[v̂_i(L + k) − Π(θ*_t), v̂_i(L + k) − v̂_i(L)]]   (3)

where v̂_i(L + k) − Π(θ*_t) considers surplus already held, v̂_i(L + k) − v̂_i(L) considers the marginal value of a good, and taking the max[0, ·] considers the overall usefulness of pursuing the good. However, and somewhat counterintuitively, we are not able to implement this bidding scheme without forfeiting truthfulness. The Π(θ*_t) term in Equation 3 (i.e., the amount of guaranteed surplus bidder i has already obtained) can be influenced by proxy j's bid. Therefore, bidder j may have the incentive to misrepresent her valuation to her proxy if she believes doing so will cause i to bid differently in the future in a manner beneficial to j. Consider the following example where the proxy scheme is refined to bid the maximum willingness to pay.

Example 3. Alice values either one ton of Sand or one ton of Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Alice's proxy bids $2,000 and Bob's bids $1,500. Alice's proxy wins an option to purchase Sand for $1,500. On day two, a Stone auction is held, where Alice's proxy bids $1,500 [as she has already obtained a guaranteed $500 of surplus from winning a Sand option, and so reduces her Stone bid by this amount], and Bob's bids $1,500. Either Alice's proxy or Bob's proxy will win the Stone option. At the end of the second day, Alice's proxy holds an option with an exercise price of $1,500 to obtain a good valued at $2,000, and so obtains $500 in surplus.
Now, consider what would have happened had Alice declared that she valued only Stone.

Example 4. Alice declares valuing only Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Bob's proxy bids $1,500. Bob's proxy wins an option to purchase Sand for $0. On day two, a Stone auction is held, where Alice's proxy bids $2,000, and Bob's bids $0 [as he has already obtained a guaranteed $1,500 of surplus from winning a Sand option, and so reduces his Stone bid by this amount]. Alice's proxy wins the Stone option for $0. At the end of the second day, Alice's proxy holds an option with an exercise price of $0 to obtain a good valued at $2,000, and so obtains $2,000 in surplus.

By misrepresenting her valuation (i.e., excluding her value of Sand), Alice was able to secure higher surplus by guiding Bob's bid for Stone down to $0. An area of immediate further work by the authors is to develop a more sophisticated proxy agent that can allow for bidding of maximum willingness to pay (Equation 3) while maintaining truthfulness. An additional, practical concern with our proxy scheme is that we assume an available, trusted, and well-understood method to characterize goods (and presumably the quality of goods). We envision this happening in practice by sellers defining a classification for their item upon entering the market, for instance via a UPC code. Just as on eBay, this would allow an opportunity for sellers to improve revenue by overstating the quality of their item ("new" vs. "like new"), and raises the issue of how well a reputation scheme could address this.

7. CONCLUSIONS

We introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complementary goods.
Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?

Acknowledgments
We would like to thank Pai-Ling Yin. Helpful comments have been received from William Simpson, attendees at Harvard University's EconCS and ITM seminars, and anonymous reviewers. Thank you to Aaron L. Roth and Kang-Xing Jin for technical support. All errors and omissions remain our own.

8. REFERENCES
[1] P. Anthony and N. R. Jennings. Developing a bidding agent for multiple heterogeneous auctions. ACM Trans. on Internet Technology, 2003.
[2] R. Bapna, P. Goes, A. Gupta, and Y. Jin. User heterogeneity and its impact on electronic auction market design: An empirical exploration. MIS Quarterly, 28(1):21-43, 2004.
[3] D. Bertsimas, J. Hawkins, and G. Perakis. Optimal bidding in on-line auctions. Working Paper, 2002.
[4] C. Boutilier, M. Goldszmidt, and B. Sabata. Sequential auctions for the allocation of resources with complementarities. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 527-534, 1999.
[5] A. Byde, C. Preist, and N. R. Jennings. Decision procedures for multiple auctions. In Proc. 1st Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS-02), 2002.
[6] M. M. Bykowsky, R. J. Cull, and J. O. Ledyard. Mutually destructive bidding: The FCC auction design problem. Journal of Regulatory Economics, 17(3):205-228, 2000.
[7] Y. Chen, C. Narasimhan, and Z. J. Zhang. Consumer heterogeneity and competitive price-matching guarantees. Marketing Science, 20(3):300-314, 2001.
[8] A. K. Dixit and R. S. Pindyck. Investment under Uncertainty.
Princeton University Press, 1994.
[9] R. Gopal, S. Thompson, Y. A. Tung, and A. B. Whinston. Managing risks in multiple online auctions: An options approach. Decision Sciences, 36(3):397-425, 2005.
[10] A. Greenwald and J. O. Kephart. Shopbots and pricebots. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 506-511, 1999.
[11] P. A. Haile and E. Tamer. Inference with an incomplete model of English auctions. Journal of Political Economy, 11(1), 2003.
[12] M. T. Hajiaghayi, R. Kleinberg, M. Mahdian, and D. C. Parkes. Online auctions with re-usable goods. In Proc. ACM Conf. on Electronic Commerce, 2005.
[13] K. Hendricks, I. Onur, and T. Wiseman. Preemption and delay in eBay auctions. University of Texas at Austin Working Paper, 2005.
[14] A. Iwasaki, M. Yokoo, and K. Terada. A robust open ascending-price multi-unit auction protocol against false-name bids. Decision Support Systems, 39:23-40, 2005.
[15] J. D. Hess and E. Gerstner. Price-matching policies: An empirical case. Managerial and Decision Economics, 12(4):305-315, 1991.
[16] A. X. Jiang and K. Leyton-Brown. Estimating bidders' valuation distributions in online auctions. In Workshop on Game Theory and Decision Theory (GTDT) at IJCAI, 2005.
[17] R. Lavi and N. Nisan. Competitive analysis of incentive compatible on-line auctions. In Proc. 2nd ACM Conf. on Electronic Commerce (EC-00), 2000.
[18] Y. J. Lin. Price matching in a model of equilibrium price dispersion. Southern Economic Journal, 55(1):57-69, 1988.
[19] D. Lucking-Reiley and D. F. Spulber. Business-to-business electronic commerce. Journal of Economic Perspectives, 15(1):55-68, 2001.
[20] A. Ockenfels and A. Roth. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet. American Economic Review, 92(4):1093-1103, 2002.
[21] M. Peters and S. Severinov. Internet auctions with many traders. Journal of Economic Theory (Forthcoming), 2005.
[22] R. Porter.
Mechanism design for online real-time scheduling. In Proceedings of the 5th ACM Conference on Electronic Commerce, pages 61-70. ACM Press, 2004.
[23] M. H. Rothkopf and R. Engelbrecht-Wiggans. Innovative approaches to competitive mineral leasing. Resources and Energy, 14:233-248, 1992.
[24] T. Sandholm and V. Lesser. Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35:212-270, 2001.
[25] T. W. Sandholm and V. R. Lesser. Issues in automated negotiation and electronic commerce: Extending the Contract Net framework. In Proc. 1st International Conference on Multi-Agent Systems (ICMAS-95), pages 328-335, 1995.
[26] H. S. Shah, N. R. Joshi, A. Sureka, and P. R. Wurman. Mining for bidding strategies on eBay. Lecture Notes on Artificial Intelligence, 2003.
[27] M. Stryszowska. Late and multiple bidding in competing second price Internet auctions. EuroConference on Auctions and Market Design: Theory, Evidence and Applications, 2003.
[28] J. T.-Y. Wang. Is last minute bidding bad? UCLA Working Paper, 2003.
[29] R. Zeithammer. An equilibrium model of a dynamic auction marketplace. Working Paper, University of Chicago, 2005.

The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution*

ABSTRACT

Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets.
An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.

1. INTRODUCTION

Electronic markets represent an application of information systems that has generated significant new trading opportunities while allowing for the dynamic pricing of goods. In addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]). Many authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1]. There is still little evidence of automated trading in e-markets, though. We believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs. Without this, we do not expect individual consumers, or firms, to be confident in placing their business in the "hands" of an automated agent.

* A preliminary version of this work appeared in the AMEC workshop in 2004.

One of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B. Among items listed on eBay, many are essentially identical. This is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5B of eBay's gross merchandise volume in 2005. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction.
While Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the "wrong" auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem. As investigated by Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions.^1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone. Both problems arise on eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations.

1 The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.

Why might the sequential auction problem be bad? Complex games may lead to bidders employing costly strategies and making mistakes. Potential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities. Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.

1.1 Options + Proxies: A Proposed Solution

Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price.
Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers, after purchase, the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. These two retail policies provide the basis for the scheme proposed in this paper.^2 We extend the proxy bidding technology currently employed by eBay. Our "super"-proxy extension will take advantage of a new, real-options-based market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option. Buyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest, together with the latest time period in which they are willing to wait to receive the good(s). The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions. A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize the buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics. We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the summer of 2005. LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem.
We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. Using this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay. Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient. 1.2 Related Work A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, oftentimes in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20]. These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. 2Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, and these are not explicitly set by sellers but rather by buyers' bids.
Previous work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction. Iwasaki et al. [14] have considered the role of options in the context of a single, monolithic auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized marketplace [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem. However, their work uses costly options and does not remove the sequential bidding problem completely. Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions. 2. EBAY AND THE DELL E193FP The most common type of auction held on eBay is a single-item proxy auction. Auctions open at a given time and remain open for a set period of time (usually one week). Bidders bid for the item by giving a proxy a value ceiling.
The proxy will bid on behalf of the bidder only as much as is necessary to maintain a winning position in the auction, up to the ceiling received from the bidder. Bidders may communicate with the proxy multiple times before an auction closes. In the event that a bidder's proxy has been outbid, a bidder may give the proxy a higher ceiling to use in the auction. eBay's proxy auction implements an incremental version of a Vickrey auction, with the item sold to the highest bidder for the second-highest bid plus a small increment.

Figure 1: Histogram of number of LCD auctions available to each bidder and number of LCD auctions in which a bidder participates.

The market analyzed in this paper is that of a specific model of an LCD monitor, a 19" Dell LCD model E193FP. This market was selected for a variety of reasons including:
• The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay.3
• The volume transacted is fairly high, at approximately 500 units sold per month.
• The item is not usually bundled with other items.
• The item is typically sold "as new," and so suitable for the price-matching of the options-based scheme.
Raw auction information was acquired via a Perl script. The script accesses the eBay search engine,4 and returns all auctions containing the terms 'Dell' and 'LCD' that have closed within the past month.5 Data was stored in a text file for post-processing.
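The incremental second-price rule just described can be sketched in a few lines. This is a minimal illustrative model, not eBay's actual implementation: the function name and the flat $2.50 increment are assumptions (eBay uses a tiered increment table).

```python
def proxy_auction_close(ceilings, increment=2.50):
    """Close an eBay-style proxy auction: the highest proxy ceiling wins,
    paying the second-highest ceiling plus a small bid increment,
    capped at the winner's own ceiling."""
    ranked = sorted(ceilings.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    if len(ranked) == 1:
        return winner, 0.0  # no competition: opening price (assumed 0 here)
    second = ranked[1][1]
    return winner, min(top, second + increment)
```

With ceilings of $250 and $230, the higher proxy wins at $232.50, mirroring the "second-highest bid plus a small increment" rule above.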
To isolate the auctions in the domain of interest, queries were made against the titles of eBay auctions that closed between 27 May, 2005 and 1 October, 2005.6 Figure 1 provides a general sense of how many LCD auctions occur while a bidder is interested in pursuing a monitor.7 8,746 bidders (86%) had more than one auction available between when they first placed a bid on eBay and the latest closing time of an auction in which they bid (with an average of 78 auctions available). Figure 1 also illustrates the number of auctions in which each bidder participates. Only 32.3% of bidders who had more than one auction available are observed to bid in more than one auction (bidding in 3.6 auctions on average). A simple regression analysis shows that bidders tend to submit maximal bids to an auction that are $1.22 higher after spending twice as much time in the system, as well as bids that are $0.27 higher in each subsequent auction. 6Specifically, the query found all auctions where the title contained all of the following strings: 'Dell,' 'LCD' and 'E193FP,' while excluding all auctions that contained any of the following strings: 'Dimension,' 'GHZ,' 'desktop,' 'p4' and 'GB.' The exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest. For example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded. 7As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods. Bidders have an average observed patience of 3.9 days (with a standard deviation of 11.4 days).
Among the 508 bidders that won exactly one monitor and participated in multiple auctions, 201 (40%) paid more than $10 above the closing price of another auction in which they bid, paying on average $35 more (standard deviation $21) than the closing price of the cheapest auction in which they bid but did not win. Furthermore, among the 2,216 bidders that never won an item despite participating in multiple auctions, 421 (19%) placed a losing bid in one auction that was more than $10 higher than the closing price of another auction in which they bid, submitting a losing bid on average $34 more (standard deviation $23) than the closing price of the cheapest auction in which they bid but did not win. Although these measures do not say that a bidder who lost could definitively have won (because we only consider the final winning price and not the bid of the winner to her proxy), or that a bidder who won could have secured a better price, they are at least indicative of some bidder mistakes. 3. MODELING THE SEQUENTIAL AUCTION PROBLEM While the eBay analysis was for simple bidders who desire only a single item, let us now consider a more general scenario where people may desire multiple goods of different types, possessing general valuations over those goods. Consider a world with buyers (sometimes called bidders) B and K different types of goods G1, ..., GK. Let T = {0, 1, ...} denote time periods. Let L denote a bundle of goods, represented as a vector of size K, where Lk ∈ {0, 1} denotes the quantity of good type Gk in the bundle.8 The type of a buyer i ∈ B is (ai, di, vi), with arrival time ai ∈ T, departure time di ∈ T, and private valuation vi(L) ≥ 0 for each bundle of goods L received between ai and di, and zero value otherwise. The arrival time models the period in which a buyer first realizes her demand and enters the market, while the departure time models the period in which a buyer loses interest in acquiring the good(s).
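The buyer model just defined can be transcribed directly. A minimal sketch, with illustrative class and field names; bundles are length-K tuples with entries in {0, 1}, as above:

```python
from dataclasses import dataclass

Bundle = tuple  # length-K vector with entries in {0, 1}

@dataclass
class Buyer:
    arrival: int    # a_i: period in which demand is first realized
    departure: int  # d_i: last period of interest in the good(s)
    values: dict    # maps Bundle -> private valuation v_i(L) >= 0

    def value(self, bundle: Bundle, t: int) -> float:
        """v_i(L) if the bundle is received between arrival and
        departure, and zero otherwise."""
        if self.arrival <= t <= self.departure:
            return self.values.get(bundle, 0.0)
        return 0.0
```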
In settings with general valuations, we need an additional assumption: an upper bound on the difference between a buyer's arrival and departure, denoted ΔMax. Buyers have quasi-linear utilities, so that the utility of buyer i receiving bundle L and paying p, in some period no later than di, is ui(L, p) = vi(L) − p. Each seller j ∈ S brings a single item kj to the market, has no intrinsic value for it, and wants to maximize revenue. Seller j has an arrival time, aj, which models the period in which she is first interested in listing the item, while the departure time, dj, models the latest period in which she is willing to consider having an auction for the item close. A seller will receive payment by the end of the reported departure of the winning buyer. We say an individual auction in a sequence is locally strategyproof (LSP) if truthful bidding is a dominant strategy for a buyer that can only bid in that auction. Consider the following example to see that LSP is insufficient for the existence of a dominant bidding strategy for buyers facing a sequence of auctions. EXAMPLE 1. Alice values one ton of Sand together with one ton of Stone at $2,000. Bob holds a Vickrey auction for one ton of Sand on Monday and a Vickrey auction for one ton of Stone on Tuesday. Alice has no dominant bidding strategy because she needs to know the price for Stone on Tuesday to know her maximum willingness to pay for Sand on Monday. DEFINITION 1. The sequential auction problem. Given a sequence of auctions, despite each auction being locally strategyproof, a bidder has no dominant bidding strategy. Consider a sequence of auctions. Generally, auctions selling the same item will be uncertainly-ordered, because a buyer will not know the ordering of closing prices among the auctions. 8We extend notation whereby a single item k of type Gk refers to the vector L with Lk = 1.
Define the interesting bundles for a buyer as all bundles that could maximize the buyer's profit for some combination of auctions and bids of other buyers.9 Within the interesting bundles, say that an item has uncertain marginal value if the marginal value of the item depends on the other goods held by the buyer.10 Say that an item is oversupplied if there is more than one auction offering an item of that type. Say two bundles are substitutes if one of those bundles has the same value as the union of both bundles.11 PROPOSITION 1. Given locally strategyproof single-item auctions, the sequential auction problem exists for a bidder if and only if either of the following two conditions is true: (1) within the set of interesting bundles (a) there are two bundles that are substitutes, (b) there is an item with uncertain marginal value, or (c) there is an item that is oversupplied; (2) a bidder faces competitors' bids that are conditioned on the bidder's past bids. PROOF. (Sketch.) (⇐) A bidder does not have a dominant strategy when (a) she does not know which bundle among substitutes to pursue, (b) she faces the exposure problem, or (c) she faces the multiple copies problem. Additionally, a bidder does not have a dominant strategy when she does not know how to optimally influence the bids of competitors. (⇒) By contradiction. A bidder has a dominant strategy to bid her constant marginal value for a given item in each auction available when conditions (1) and (2) are both false.
For example, the following buyers all face the sequential auction problem as a result of conditions (a), (b) and (c) respectively: a buyer who values one ton of Sand for $1,000, or one ton of Stone for $2,000, but not both Sand and Stone; a buyer who values one ton of Sand for $1,000, one ton of Stone for $300, and one ton of Sand and one ton of Stone for $1,500, and can participate in an auction for Sand before an auction for Stone; a buyer who values one ton of Sand for $1,000 and can participate in many auctions selling Sand. 9Assume that the empty set is an interesting bundle. 10Formally, an item k has uncertain marginal value if |{m : m = vi(Q) − vi(Q − k), ∀Q ⊆ L ∈ InterestingBundles, Q ⊇ k}| > 1. 11Formally, two bundles A and B are substitutes if vi(A ∪ B) = max(vi(A), vi(B)), where A ∪ B = L with Lk = max(Ak, Bk). 4. "SUPER" PROXIES AND OPTIONS The novel solution proposed in this work to resolve the sequential auction problem consists of two primary components: richer proxy agents, and options with price matching. In finance, a real option is a right to acquire a real good at a certain price, called the exercise price. For instance, Alice may obtain from Bob the right to buy Sand from him at an exercise price of $1,000. An option provides the right to purchase a good at an exercise price but not the obligation. This flexibility allows buyers to put together a collection of options on goods and then decide which to exercise. Options are typically sold at a price called the option price. However, options obtained at a non-zero option price cannot generally support a simple, dominant bidding strategy, as a buyer must compute the expected value of an option to justify its cost [8]. This computation requires a model of the future, which in our setting requires a model of the bidding strategies and the values of other bidders. This is the very kind of game-theoretic reasoning that we want to avoid.
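The formal conditions behind Proposition 1 (footnotes 10 and 11) can be checked mechanically for small valuations. A sketch, assuming a valuation given as a function over bundle vectors; all helper names are hypothetical:

```python
from itertools import product

def union(a, b):
    """Componentwise union A ∪ B: L_k = max(A_k, B_k)."""
    return tuple(max(x, y) for x, y in zip(a, b))

def are_substitutes(v, a, b):
    """Footnote 11: A and B are substitutes if the union of both
    bundles is worth no more than the better of the two alone."""
    return v(union(a, b)) == max(v(a), v(b))

def subbundles(l):
    """All bundles Q with Q <= L componentwise."""
    return product(*[range(x + 1) for x in l])

def remove_one(q, k):
    return tuple(x - 1 if i == k else x for i, x in enumerate(q))

def has_uncertain_marginal_value(v, k, interesting):
    """Footnote 10: item k has uncertain marginal value if
    v(Q) - v(Q - k) takes more than one value over sub-bundles Q
    (containing k) of the interesting bundles."""
    margins = {v(q) - v(remove_one(q, k))
               for l in interesting for q in subbundles(l) if q[k] >= 1}
    return len(margins) > 1
```

On the console/game example from the introduction ($200 console, $30 game, $250 together), the game has uncertain marginal value ($30 alone, $50 alongside the console), while console and game are not substitutes.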
Instead, we consider costless options with an option price of zero. This will require some care, as buyers are weakly better off with a costless option than without one, whatever its exercise price. However, multiple bidders pursuing options with no intention of exercising them would cause the efficiency of an auction for options to unravel. This is the role of the mandatory proxy agents, which intermediate between buyers and the market. A proxy agent forces a link between the valuation function used to acquire options and the valuation used to exercise options. If a buyer tells her proxy an inflated value for an item, she runs the risk of having the proxy exercise options at a price greater than her value. 4.1 Buyer Proxies 4.1.1 Acquiring Options After her arrival, a buyer submits her valuation ˆvi (perhaps untruthfully) to her proxy in some period ˆai ≥ ai, along with a claim about her departure time ˆdi ≥ ˆai. All transactions are intermediated via proxy agents. Each auction is modified to sell an option on that good to the highest bidding proxy, with an initial exercise price set to the second-highest bid received.12 When an option in which a buyer is interested becomes available for the first time, the proxy determines its bid by computing the buyer's maximum marginal value for the item, and then submits a bid in this amount. A proxy does not bid for an item when it already holds an option. The bid price is: bid^t_i(k) = max_L [ˆvi(L + k) − ˆvi(L)], (1) where L ranges over bundles not containing an item of type k. By having a proxy compute a buyer's maximum marginal value for an item and then bidding only that amount, a buyer's proxy will win any auction that could possibly be of benefit to the buyer and only lose those auctions that could never be of value to the buyer. 12The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item. Without a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept.
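A brute-force sketch of this maximum-marginal-value bid follows. It enumerates all 2^K bundles, so it is illustrative only; Section 4.2 gives the O(KM^2) computation for XOR valuations. The function name is an assumption:

```python
from itertools import product

def max_marginal_value(v, k, K):
    """Bid price for a single item of type k: the maximum, over bundles
    L with L_k = 0, of v(L + k) - v(L)."""
    best = 0.0
    for l in product((0, 1), repeat=K):
        if l[k] == 0:
            plus_k = tuple(1 if i == k else x for i, x in enumerate(l))
            best = max(best, v(plus_k) - v(l))
    return best
```

For the console/game valuation from the introduction, the proxy would bid $220 for the console (its marginal value when the game is already held) and $50 for the game.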
Table 1: Three-buyer example with each wanting a single item and one auction occurring on Monday and Tuesday. "XY" implies an option with exercise price X and bookkeeping that a proxy has prevented Y from currently possessing an option. "→" is the updating of exercise price and bookkeeping. When a proxy wins an auction for an option, the proxy will store in its local memory the identity (which may be a pseudonym) of the proxy not holding an option because of the proxy's win (i.e., the proxy that it 'bumped' from winning, if any). This information will be used for price matching. 4.1.2 Pricing Options Sellers agree, by joining the market, to allow the proxy representing a buyer to adjust the exercise price of an option that it holds downwards if the proxy discovers that it could have achieved a better price by waiting to bid in a later auction for an option on the same good. To assist in the implementation of the price matching scheme, each proxy tracks future auctions for an option that it has already won and will determine who would be bidding in that auction had the proxy delayed its entry into the market until this later auction. The proxy will request price matching from the seller that granted it an option if the proxy discovers that it could have secured a lower price by waiting. To reiterate, the proxy does not acquire more than one option for any good. Rather, it reduces the exercise price on its already issued option if a better deal is found. The proxy is able to discover these deals by asking each future auction to report the identities of the bidders in that auction together with their bids. This needs to be enforced by eBay, as the central authority. The highest bidder in this later auction, across those whose identity is not stored in the proxy's memory for the given item, is exactly the bidder against whom the proxy would be competing had it delayed its entry until this auction.
If this high bid is lower than the current option price held, the proxy "price matches" down to this high bid price. After price matching, one of two adjustments will be made by the proxy for bookkeeping purposes. If the winner of the auction is the bidder whose identity has been in the proxy's local memory, the proxy will replace that local information with the identity of the bidder whose bid it just price matched, as that is now the bidder the proxy has prevented from obtaining an option. If the auction winner's identity is not stored in the proxy's local memory, the memory may be cleared. In this case, the proxy will simply price match against the bids of future auction winners on this item until the proxy departs. EXAMPLE 2 (TABLE 1). Molly's proxy wins the Monday auction, submitting a bid of $8 and receiving an option for $6. Molly's proxy adds Nancy to its local memory, as Nancy's proxy would have won had Molly's proxy not bid. On Tuesday, only Nancy's and Polly's proxies bid (as Molly's proxy holds an option), with Nancy's proxy winning an option for $4 and noting that it bumped Polly's proxy. At this time, Molly's proxy will price match its option down to $4 and replace Nancy with Polly in its local memory as per the price match algorithm, as Polly would be holding an option had Molly never bid. 4.1.3 Exercising Options At the reported departure time the proxy chooses which options to exercise. Therefore, a seller of an option must wait until period ˆdw for the option to be exercised and receive payment, where w was the winner of the option.13 For bidder i, in period ˆdi, the proxy chooses the option(s) that maximize the (reported) utility of the buyer: θ* ∈ argmax_{θ⊆Θ} [ˆvi(γ(θ)) − π(θ)], where Θ is the set of all options held, γ(θ) are the goods corresponding to a set of options, and π(θ) is the sum of exercise prices for a set of options. Table 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not.
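The acquisition, bookkeeping and price-matching rules above can be replayed on the Molly/Nancy/Polly example as a minimal sketch. This assumes one auction per day for a single item type; the function name and data layout are illustrative, and the exercising step is omitted:

```python
def run_auctions(daily_bids):
    """daily_bids: list of {bidder: bid} dicts, one auction per day, all
    for the same item type. Returns {option holder: exercise price}."""
    options, memory = {}, {}
    for bids in daily_bids:
        # proxies that already hold an option do not bid again
        live = {b: v for b, v in bids.items() if b not in options}
        if not live:
            continue
        order = sorted(live, key=live.get, reverse=True)
        winner = order[0]
        options[winner] = live[order[1]] if len(order) > 1 else 0.0
        memory[winner] = order[1] if len(order) > 1 else None
        # earlier winners price match against the highest bid among
        # proxies whose identity is NOT in their local memory
        for holder in [h for h in options if h != winner]:
            rivals = {b: v for b, v in live.items()
                      if b != memory.get(holder)}
            if not rivals:
                continue
            top = max(rivals, key=rivals.get)
            if rivals[top] < options[holder]:
                options[holder] = rivals[top]
                # bookkeeping: if the bumped bidder won this auction, the
                # holder now blocks the bidder it just matched against
                memory[holder] = top if winner == memory.get(holder) else None
    return options
```

Replaying Example 2 (Molly bids $8, Nancy $6, Polly $4) yields an option for Molly at $6 on Monday, matched down to $4 on Tuesday, and an option for Nancy at $4.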
All other options are returned.14 No options are exercised when no combination of options has positive utility. 4.1.4 Why bookkeep and not match winning price? One may believe that an alternative method for implementing a price matching scheme could be to simply have proxies match the lowest winning price they observe after winning an option. However, as demonstrated in Table 2, such a simple price matching scheme will not lead to a truthful system. The first scenario in Table 2 demonstrates the outcome if all agents were to truthfully report their types. Molly would win the Monday auction and receive an option with an exercise price of $6 (subsequently exercising that option at the end of Monday), and Nancy would win the Tuesday auction and receive an option with an exercise price of $4 (subsequently exercising that option at the end of Tuesday). The second scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, using the proposed bookkeeping method. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. 13While this appears restrictive on the seller, we believe it is not significantly different from what sellers on eBay currently endure in practice. An auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment). 14Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses. However, the system will not allow a seller to re-auction an option until ΔMax after the option had first been issued, in order to maintain a truthful mechanism.
Nancy's proxy would observe that the highest bid submitted on Tuesday among those proxies not stored in local memory is Polly's bid of $4, and so Nancy's proxy would price match the exercise price of its option down to $4. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is the same as when she truthfully revealed her type to her proxy. The third scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, if the price matching scheme were for proxies to simply match their option price to the lowest winning price at any time while they are in the system. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the lowest price on Tuesday was $0, and so Nancy's proxy would price match the exercise price of its option down to $0. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is lower than when she truthfully revealed her type to the proxy. Therefore, a price matching policy of simply matching the lowest price paid may not elicit truthful information from buyers. 4.2 Complexity of Algorithm An XOR-valuation of size M for buyer i is a set of M terms, <L^1, v^1_i>, ..., <L^M, v^M_i>, that maps distinct bundles to values, where i is interested in acquiring at most one such bundle. For any bundle S, vi(S) = max_{L^m ⊆ S} v^m_i. THEOREM 1. Given an XOR-valuation which possesses M terms, there is an O(KM^2) algorithm for computing all maximum marginal values, where K is the number of different item types in which a buyer may be interested. PROOF. For each item type, recall Equation 1, which defines the maximum marginal value of an item. For each bundle L in the M-term valuation, vi(L + k) may be found by iterating over the M terms.
Therefore, the number of terms explored to determine the maximum marginal value for any item is O(M^2), and so the total number of bundle comparisons to be performed to calculate all maximum marginal values is O(KM^2). THEOREM 2. The total memory required by a proxy for implementing price matching is O(K), where K is the number of distinct item types. The total work performed by a proxy to conduct price matching in each auction is O(1). PROOF. By construction of the algorithm, the proxy stores one maximum marginal value for each item for bidding, of which there are O(K); at most one buyer's identity for each item, of which there are O(K); and one current option exercise price for each item, of which there are O(K). For each auction, the proxy either submits a precomputed bid or price matches, both of which take O(1) work. 4.3 Truthful Bidding to the Proxy Agent Proxies transform the market into a direct revelation mechanism, where each buyer i interacts with the proxy only once,15 and does so by declaring a bid, bi, which is defined as an announcement of her type, (ˆai, ˆdi, ˆvi), where the announcement may or may not be truthful. We denote all received bids other than i's as b−i. Given bids, b = (bi, b−i), the market determines allocations, xi(b), and payments, pi(b) ≥ 0, to each buyer (using an online algorithm). A dominant strategy equilibrium for buyers requires that vi(xi(bi, b−i)) − pi(bi, b−i) ≥ vi(xi(b′i, b−i)) − pi(b′i, b−i), ∀b′i ≠ bi, ∀b−i. We now establish that it is a dominant strategy for a buyer to reveal her true valuation and true departure time to her proxy agent immediately upon arrival to the system. The proof builds on the price-based characterization of strategyproof single-item online auctions in Hajiaghayi et al. [12]. Define a monotonic and value-independent price function ps_i(ai, di, L, v−i), which can depend on the values of other agents v−i.
Price ps_i(ai, di, L, v−i) will represent the price available to agent i for bundle L in the mechanism if it announces arrival time ai and departure time di. The price is independent of the value vi of agent i, but can depend on ai, di and L as long as it satisfies a monotonicity condition. LEMMA 1. A mechanism in which each agent i faces a monotonic, value-independent price ps_i(ai, di, L, v−i) for every bundle L, and in which the allocation at the reported departure maximizes the agent's reported utility at those prices, makes truthful revelation of arrival, departure and valuation a dominant strategy. PROOF. Agent i cannot benefit from reporting a later departure ˆdi because the allocation is made in period ˆdi and the agent would have no value for this allocation. Agent i cannot benefit from reporting a later arrival ˆai > ai or earlier departure ˆdi < di because of price monotonicity. Finally, the agent cannot benefit from reporting some ˆvi ≠ vi because its reported valuation does not change the prices it faces, and the mechanism maximizes its utility given its reported valuation and given the prices. LEMMA 2. At any given time, there is at most one buyer in the system whose proxy does not hold an option for a given item type because of buyer i's presence in the system, and the identity of that buyer will be stored in i's proxy's local memory at that time if such a buyer exists. PROOF. Consider the proxy (if any) whose identity i's proxy currently stores for a given item. Either (a) the bumped proxy will leave the system having never won an option, or (b) the bumped proxy will win an auction in the future. If (a), the buyer's presence prevented exactly that one buyer from winning an option, but will not have prevented any other proxies from winning an option (as the buyer's proxy will not bid on additional options upon securing one), and will have had that bumped proxy's identity in its local memory by definition of the algorithm. If (b), the buyer has not prevented the bumped proxy from winning an option after all, but rather has prevented only the proxy that lost to the bumped proxy from winning (if any), whose identity will now be stored in the proxy's local memory by definition of the algorithm. For this new identity in the buyer's proxy's local memory, either scenario (a) or (b) will be true, ad infinitum.
Given this, we show that the options-based infrastructure implements a price-based auction with a monotonic and value-independent price schedule to every agent. THEOREM 3. Truthful revelation of valuation, arrival and departure is a dominant strategy for a buyer in the options-based market. PROOF. First, define a simple agent-independent price function p^k_i(t, v−i) as the highest bid by the proxies not holding an option on an item of type Gk at time t, not including the proxy representing i herself and not including any proxies that would have already won an option had i never entered the system (i.e., whose identity is stored in i's proxy's local memory) (∞ if no supply at t). This set of proxies is independent of any declaration i makes to its proxy (as the set explicitly excludes the at most one proxy (see Lemma 2) that i has prevented from holding an option), and each bid submitted by a proxy within this set is only a function of its own buyer's declared valuation (see Equation 1). Furthermore, i cannot influence the supply she faces, as any options returned by bidders due to a price set by i's proxy's bid will be re-auctioned after i has departed the system. Therefore, p^k_i(t, v−i) is independent of i's declaration to its proxy. Next, define ps^k_i(ˆai, ˆdi, v−i) = min_{ˆai ≤ τ ≤ ˆdi} p^k_i(τ, v−i) (possibly ∞) as the minimum price over p^k_i(t, v−i), which is clearly monotonic. By construction of price matching, this is exactly the price obtained by a proxy on any option that it holds at departure. Define ps_i(ˆai, ˆdi, L, v−i) = Σ_{k=1}^{K} ps^k_i(ˆai, ˆdi, v−i) · Lk, which is monotonic in ˆai, ˆdi and L, since ps^k_i(ˆai, ˆdi, v−i) is monotonic in ˆai and ˆdi and (weakly) greater than zero for each k. Given the set of options held at ˆdi, which may be a subset of those items with non-infinite prices, the proxy exercises options to maximize the reported utility.
Left to show is that all bundles that could not be obtained with the options held are priced sufficiently high as to not be preferred. For each such bundle, either there is an item priced at ∞ (in which case the bundle would not be desired) or there must be an item in that bundle for which the proxy does not hold an option although one was available. In all auctions for such an item there must have been a distinct bidder with a bid greater than bid^t_i(k), which subsequently results in ps^k_i(ˆai, ˆdi, v−i) > bid^t_i(k), and so the bundle without k would be preferred. THEOREM 4. The super proxy, options-based scheme is individually rational for both buyers and sellers. PROOF. By construction, the proxy exercises the profit-maximizing set of options obtained, or no options if no set of options derives non-negative surplus. Therefore, buyers are guaranteed non-negative surplus by participating in the scheme. For sellers, the price of each option is based on a non-negative bid or zero. Table 3: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens as well as the simulated options-based market using worst-case estimates of bidders' true value. 5. EVALUATING THE OPTIONS/PROXY INFRASTRUCTURE A goal of the empirical benchmarking, and a reason to collect data from eBay, is to try to build a realistic model of buyers from which to estimate seller revenue and other market effects under the options-based scheme. We simulate a sequence of auctions that match the timing of the Dell LCD auctions on eBay.16 When an auction successfully closes on eBay, we simulate a Vickrey auction for an option on the item sold in that period. Auctions that do not successfully close on eBay are not simulated.
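The simulation is driven by bidder types inferred from observed eBay behavior (arrival, departure and a conservative value estimate, as described next). A minimal sketch of such an estimator; the bid-log layout and function name are assumptions for illustration:

```python
def estimate_types(bid_log, close_times):
    """Conservative bidder-type estimates:
    arrival  = first observed interaction with the eBay proxy,
    departure = latest close among auctions the bidder entered,
    value    = highest bid the bidder was observed to place.
    bid_log rows are (bidder, auction, time, amount);
    close_times maps auction -> closing time."""
    types = {}
    for bidder, auction, t, amount in bid_log:
        a, d, v = types.get(bidder, (t, close_times[auction], amount))
        types[bidder] = (min(a, t),
                         max(d, close_times[auction]),
                         max(v, amount))
    return types
```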
We estimate the arrival, departure and value of each bidder on eBay from their observed behavior .17 Arrival is estimated as the first time that a bidder interacts with the eBay proxy, while departure is estimated as the latest closing time among eBay auctions in which a bidder participates. We initially adopt a particularly conservative estimate for bidder value, estimating it as the highest bid a bidder was observed to make on eBay. Table 3 compares the distribution of closing prices on eBay and in the simulated options scheme. While the average revenue in both schemes is virtually the same ($239.66 in the options scheme vs. $240.24 on eBay), the winners in the options scheme tend to value the item won 7% more than the winners on eBay ($263 in the options scheme vs. $244 on eBay). 5.1 Bid Identification We extend the work of Haile and Tamer [11] to sequential auctions to get a better view of underlying bidder values. Rather than assume for bidders an equilibrium behavior as in standard econometric techniques, Haile and Tamer do not attempt to model how bidders' true values get mapped into a bid in any given auction. Rather, in the context of repeated 16When running the simulations, the results of the first and final ten days of auctions are not recorded to reduce edge effects that come from viewing a discrete time window of a continuous process. 17For the 100 bidders that won multiple times on eBay, we have each one bid a constant marginal value for each additional item in each auction until the number of options held equals the total number of LCDs won on eBay, with each option available for price matching independently. This bidding strategy is not a dominant strategy (falling outside the type space possible for buyers on which the proof of truthfulness has been built), but is believed to be the most appropriate first order action for simulation. 
Rather, in the context of repeated single-item auctions with distinct bidder populations, Haile and Tamer make only the following two assumptions when estimating the distribution of true bidder values:
1. Bidders do not bid more than they are willing to pay.
2. Bidders do not allow an opponent to win at a price they are willing to beat.
From the first of their two assumptions, given the bids placed by each bidder in each auction, Haile and Tamer derive a method for estimating an upper bound of the bidding population's true value distribution (i.e., the bound that lies above the true value distribution). From the second of their two assumptions, given the winning price of each auction, Haile and Tamer derive a method for estimating a lower bound of the bidding population's true value distribution. It is only the upper bound of the distribution that we utilize in our work.
Figure 2: CDF of maximum bids observed and upper-bound estimate of the bidding population's distribution for maximum willingness to pay. The true population distribution lies below the estimated upper bound.
Haile and Tamer assume that bidders only participate in a single auction, and require independence of the bidding population from auction to auction. Neither assumption is valid here: the former because bidders are known to bid in more than one auction, and the latter because the set of bidders in an auction is in all likelihood not a true i.i.d. sample of the overall bidding population. In particular, those who win auctions are less likely to bid in successive auctions, while those who lose auctions are more likely to remain bidders in future auctions. In applying their methods we make the following adjustments:
• Within a given auction, each individual bidder's true willingness to pay is assumed weakly greater than the maximum bid that bidder submits across all auctions for that item (either past or future).
• When estimating the upper bound of the value distribution, if a bidder bids in more than one auction, randomly select one of the auctions in which the bidder bid, and only utilize that one observation during the estimation.18
Figure 2 provides the distribution of maximum bids placed by bidders on eBay as well as the estimated upper bound of the true value distribution of bidders based on the extended Haile and Tamer method.19 As can be seen, the smallest meaningful relative gap between the two curves occurs near the 80th percentile, where the upper bound is 1.17 times the maximum bid. We therefore adopt a uniform scaling factor of 1.15 as a less conservative model of bidder values, and now present results from this less conservative analysis.
Table 4: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market, using an adjusted Haile and Tamer estimate of bidders' true values as 15% higher than their maximum observed bid.
Table 4 shows the distribution of closing prices in auctions on eBay and in the simulated options scheme. The mean price in the options scheme is now significantly higher, 15% greater, than the prices on eBay ($276 in the options scheme vs. $240 on eBay), while the standard deviation of closing prices is lower among the options scheme auctions ($14 in the options scheme vs. $32 on eBay). Therefore, not only is the expected revenue stream higher, but the lower variance provides sellers a greater likelihood of realizing that higher revenue. The efficiency of the options scheme remains higher than on eBay. The winners in the options scheme now have an average estimated value 7.5% higher, at $302. In an effort to better understand this efficiency, we formulated a mixed integer program (MIP) to determine a simple estimate of the allocative efficiency of eBay.
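The two adjustments can be sketched as follows (the data layout is hypothetical; the full Haile and Tamer estimator, which conditions on the number of bidders per auction, is not reproduced here):

```python
import random
from collections import defaultdict

def haile_tamer_observations(bids_by_auction, seed=0):
    """Apply the two adjustments described in the text before
    upper-bound estimation.

    bids_by_auction: {auction_id: {bidder: highest bid placed there}}.
    For each bidder, her willingness-to-pay bound is her maximum bid
    across ALL auctions for the item, and only one randomly chosen
    auction contributes an observation for her.
    Returns a list of (auction_id, bidder, bound) tuples.
    """
    rng = random.Random(seed)
    auctions_of = defaultdict(list)
    for auc, bids in bids_by_auction.items():
        for bidder in bids:
            auctions_of[bidder].append(auc)
    observations = []
    for bidder, aucs in sorted(auctions_of.items()):
        bound = max(bids_by_auction[a][bidder] for a in aucs)
        observations.append((rng.choice(aucs), bidder, bound))
    return observations
```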
The MIP computes the efficient value of the offline problem with full hindsight on all bids and all supply.20 Using a scaling of 1.15, the total value allocated to eBay winners is estimated at $551,242, while the optimal value (from the MIP) is $593,301. This suggests an allocative efficiency of 92.9%: while the typical value of a winner on eBay is $281, an average value of $303 was possible.21 Note that the options-based scheme comes very close to achieving this level of efficiency [at 99.7% efficient in this estimate] even though it operates without the benefit of hindsight. Finally, although the typical winning bidder surplus decreases between eBay and the options-based scheme, some surplus redistribution would be possible because the total market efficiency is improved.22
18 …tain auctions, then further adjustments would be required in order to utilize these techniques.
19 The estimation of the points in the curve is a minimization over many variables, many of which can have small-numbers bias. Consequently, Haile and Tamer suggest using a weighted average over all terms y_i of Σ_i y_i exp(y_i ρ) / Σ_j exp(y_j ρ) to approximate the minimum while reducing the small-number effects. We used ρ = −1000 and removed observations of auctions with 17 bidders or more, as they occurred very infrequently. However, some small-numbers bias still demonstrated itself with the plateau in our upper-bound estimate around a value of $300.
20 Buyers who won more than one item on eBay are cloned so that they appear to be multiple bidders of identical type.
21 As long as one believes that every bidder's true value is a constant factor α away from their observed maximum bid, the 92.9% efficiency calculation holds for any value of α. In practice, this belief may not be reasonable. For example, if losing bidders tend to have true values close to their observed maximum bids while eBay winners have true values much greater than their observed maximum bids, then downward bias is introduced in the efficiency calculation at present.
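A brute-force stand-in for the offline computation (the actual MIP formulation is not given in this excerpt; the names, the unit-demand assumption, and the cloning of multi-item winners into separate bidders are ours):

```python
from itertools import permutations

def offline_optimal_value(bidders, auctions):
    """Hindsight-optimal allocation: assign each auction's item to at
    most one bidder so as to maximize total value, honoring each
    bidder's (arrival, departure) window.

    bidders: {name: (value, arrive, depart)}; auctions: list of closing
    times. Exponential enumeration, intended only for tiny instances.
    """
    names = list(bidders)
    best = 0.0
    # pad with None so any auction can also go unassigned
    slots = names + [None] * len(auctions)
    for assign in set(permutations(slots, len(auctions))):
        total = 0.0
        ok = True
        for close, b in zip(auctions, assign):
            if b is None:
                continue
            value, arrive, depart = bidders[b]
            if not (arrive <= close <= depart):
                ok = False
                break
            total += value
        if ok:
            best = max(best, total)
    return best

# b can only attend the early auction, so the optimum gives a the late one
best = offline_optimal_value({"a": (300.0, 0, 10), "b": (250.0, 0, 1)}, [1, 5])
```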
6. DISCUSSION
The biggest concern with our scheme is that proxy agents who may be interested in many different items may acquire many more options than they finally exercise. This can lead to efficiency loss. Notice that this is not an issue when bidders are only interested in a single item (as in our empirical study), or have linear-additive values on items. To fix this, we would prefer to have proxy agents use more caution in acquiring options and adopt a more adaptive bidding strategy than that in Equation 1. For instance, if a proxy already holds an option with an exercise price of $3 on some item for which it has a value of $10, and it values some substitute item at $5, the proxy can reason that in no circumstance will it be useful to acquire an option on the second item. We formulate a more sophisticated bidding strategy along these lines. Let Θ_t be the set of all options a proxy for bidder i already possesses at time t. Let θ_t ⊆ Θ_t be a subset of those options, the sum of whose exercise prices is π(θ_t), and the goods corresponding to those options being γ(θ_t). Let Π(θ_t) = v̂_i(γ(θ_t)) − π(θ_t) be the (reported) available surplus associated with a set of options. Let θ*_t be the set of options currently held that would maximize the buyer's surplus; i.e., θ*_t = argmax_{θ_t ⊆ Θ_t} Π(θ_t). Let the maximal willingness to pay for an item k represent a price above which the agent knows it would never exercise an option on the item given the current options held. This can be computed as follows:
max willingness to pay for k = max[0, min( v̂_i(L + k) − Π(θ*_t), v̂_i(L + k) − v̂_i(L) )]   (3)
where v̂_i(L + k) − Π(θ*_t) considers surplus already held, v̂_i(L + k) − v̂_i(L) considers the marginal value of a good, and taking the max[0, ·] considers the overall use of pursuing the good. However, and somewhat counter-intuitively, we are not able to implement this bidding scheme without forfeiting truthfulness. The Π(θ*_t) term in Equation 3 (i.e., the amount of guaranteed surplus bidder i has already obtained) can be influenced by proxy j's bid.
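The quantity Π(θ*_t), the surplus guaranteed by options already held, can be computed by enumerating subsets of held options; a minimal sketch under the notation above, with the reported valuation passed as a function (our interface, not the paper's):

```python
from itertools import combinations

def best_held_surplus(options, value):
    """Pi(theta*_t): maximum reported surplus over all subsets of the
    options currently held.

    options: list of (good, exercise_price) pairs; value: a function
    from a frozenset of goods to the reported valuation. Exponential in
    the number of options held, which is fine for small option sets.
    """
    best = 0.0  # the empty set of options always yields zero surplus
    for r in range(1, len(options) + 1):
        for subset in combinations(options, r):
            goods = frozenset(g for g, _ in subset)
            strikes = sum(p for _, p in subset)
            best = max(best, value(goods) - strikes)
    return best

# The $3-strike / $10-value example from the text: surplus held is $7,
# so a $5-valued substitute can never be worth optioning.
held = [("item1", 3.0)]
pi_star = best_held_surplus(held, lambda s: 10.0 if "item1" in s else 0.0)
```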
Therefore, bidder j may have an incentive to misrepresent her valuation to her proxy if she believes doing so will cause i to bid differently in the future in a manner beneficial to j. Consider the following example, where the proxy scheme is refined to bid the maximum willingness to pay.
EXAMPLE 3. Alice values either one ton of Sand or one ton of Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Alice's proxy bids $2,000 and Bob's bids $1,500. Alice's proxy wins an option to purchase Sand for $1,500. On day two, a Stone auction is held, where Alice's proxy bids $1,500 [as she has already obtained a guaranteed $500 of surplus from winning a Sand option, and so reduces her Stone bid by this amount], and Bob's bids $1,500. Either Alice's proxy or Bob's proxy will win the Stone option. At the end of the second day, Alice's proxy holds an option with an exercise price of $1,500 to obtain a good valued at $2,000, and so obtains $500 in surplus. Now, consider what would have happened had Alice declared that she valued only Stone.
EXAMPLE 4. Alice declares valuing only Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Bob's proxy bids $1,500. Bob's proxy wins an option to purchase Sand for $0.
22 The increase in eBay winner surplus between Tables 3 and 4 is to be expected, as the α scaling strictly increases the estimated value of the eBay winners while holding the prices at which they won constant.
On day two, a Stone auction is held, where Alice's proxy bids $2,000, and Bob's bids $0 [as he has already obtained a guaranteed $1,500 of surplus from winning a Sand option, and so reduces his Stone bid by this amount]. Alice's proxy wins the Stone option for $0. At the end of the second day, Alice's proxy holds an option with an exercise price of $0 to obtain a good valued at $2,000, and so obtains $2,000 in surplus. By misrepresenting her valuation (i.e., excluding her value of Sand), Alice was able to secure higher surplus by guiding Bob's bid for Stone to $0. An area of immediate further work by the authors is to develop a more sophisticated proxy agent that can allow for bidding of maximum willingness to pay (Equation 3) while maintaining truthfulness. An additional, practical concern with our proxy scheme is that we assume an available, trusted, and well-understood method to characterize goods (and presumably the quality of goods). We envision this happening in practice by sellers defining a classification for their item upon entering the market, for instance via a UPC code. Just as on eBay, this would allow an opportunity for sellers to improve revenue by overstating the quality of their item ("new" vs. "like new"), and raises the issue of how well a reputation scheme could address this.
7. CONCLUSIONS
We introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitutes and complements. Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?
We believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs. Without this, we do not expect individual consumers, or firms, to be confident in placing their business in the "hands" of an automated agent.
* A preliminary version of this work appeared in the AMEC workshop in 2004.
One of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B. Among items listed on eBay, many are essentially identical. This is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5B of eBay's gross merchandise volume in 2005. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction. While Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the "wrong" auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem. As investigated by Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions.1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone. Both problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations. Why might the sequential auction problem be bad? Complex games may lead to bidders employing costly strategies and making mistakes. Potential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities.
1 The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.
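The console/game numbers above make the exposure concrete; a small worked check using the values from the text:

```python
# Alice's reported values from the text: the bundle is superadditive
values = {
    frozenset(): 0,
    frozenset({"console"}): 200,
    frozenset({"game"}): 30,
    frozenset({"console", "game"}): 250,
}
synergy = values[frozenset({"console", "game"})] - (
    values[frozenset({"console"})] + values[frozenset({"game"})]
)
# synergy is the $20 Alice must decide whether to fold into her
# stand-alone console bid: include it and she risks overpaying if the
# game auction is later lost; exclude it and she risks losing the bundle.
```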
Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.
1.1 Options + Proxies: A Proposed Solution
Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price. Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers, after purchase, the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. These two retail policies provide the basis for the scheme proposed in this paper.2 We extend the proxy-bidding technology currently employed by eBay. Our "super"-proxy extension will take advantage of a new, real-options-based market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option. Buyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest, together with the latest time period in which they are willing to wait to receive the good(s). The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions.
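A price-matching refund of the kind described above can be stated in a few lines (the interface is hypothetical):

```python
def price_match_refund(price_paid, other_prices):
    """Refund under a simple price-matching policy: the buyer receives
    the difference between the price paid and the lowest price observed
    elsewhere for the same good, never less than zero.

    other_prices: iterable of prices seen for the same good.
    """
    lowest = min(other_prices, default=price_paid)
    return max(0.0, price_paid - lowest)
```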
A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize the buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics. We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005. LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem. We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. Using this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay. Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient.
2 Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.
1.2 Related Work
A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, oftentimes in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20]. These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. Previous work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction. Iwasaki et al. [14] have considered the role of options in the context of a single, monolithic auction design, to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous-item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized marketplace [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem. However, their work uses costly options and does not remove the sequential bidding problem completely. Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al.
[12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.
I-54
Approximate and Online Multi-Issue Negotiation
This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues.
Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. [ "approxim", "negoti", "time constraint", "equilibrium", "strategi", "rel error", "interact kei form", "multiag system", "disput agent", "gain from cooper", "protocol", "indivis issu", "game-theori", "onlin comput" ] [ "P", "P", "P", "P", "P", "P", "M", "U", "M", "U", "U", "R", "U", "R" ] Approximate and Online Multi-Issue Negotiation Shaheen S. Fatima Department of Computer Science University of Liverpool Liverpool L69 3BX, UK. shaheen@csc.liv.ac.uk Michael Wooldridge Department of Computer Science University of Liverpool Liverpool L69 3BX, UK. mjw@csc.liv.ac.uk Nicholas R. Jennings School of Electronics and Computer Science University of Southampton Southampton SO17 1BJ, UK. nrj@ecs.soton.ac.uk ABSTRACT This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting.
In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Algorithms, Design, Theory 1. INTRODUCTION Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter offers, in order to obtain a mutually acceptable outcome.
However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. 
As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues 951 978-81-904262-7-5 (RPS) © 2007 IFAAMAS and finding the equilibrium for this case is computationally easier than that for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from those without (see [21] for single issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate).
The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource bounded agents. The remainder of the paper is organised as follows. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes. 2. SINGLE-ISSUE NEGOTIATION We adopt the single issue model of [27] because this is a model where, during negotiation, the parties are allowed to make offers from a set of discrete offers. Since our focus is on indivisible issues (i.e., parties are allowed to make one of two possible offers: zero or one), our scenario fits in well with [27]. Hence we use this basic single issue model and extend it to multiple issues. Before doing so, we give an overview of this model and its equilibrium strategies. There are two strategic agents: a and b.
Each agent has time constraints in the form of deadlines and discount factors. The two agents negotiate over a single indivisible issue (i). This issue is a "pie" of size 1 and the agents want to determine who gets the pie. There is a deadline (i.e., a number of rounds by which negotiation must end). Let n ∈ N+ denote this deadline. The agents use an alternating offers protocol (like that of Rubinstein [18]), which proceeds through a series of time periods. One of the agents, say a, starts negotiation in the first time period (i.e., t = 1) by making an offer (x_i = 0 or 1) to b. Agent b can either accept or reject the offer. If it accepts, negotiation ends in an agreement with a getting x_i and b getting y_i = 1 − x_i. Otherwise, negotiation proceeds to the next time period, in which agent b makes a counter-offer. This process of making offers continues until one of the agents either accepts an offer or quits negotiation (resulting in a conflict). Thus, there are three possible actions an agent can take during any time period: accept the last offer, make a new counter-offer, or quit the negotiation. An essential feature of negotiations involving alternating offers is that the agents' utilities decrease with time [21]. Specifically, the decrease occurs at each step of offer and counter-offer. This decrease is represented with a discount factor denoted 0 < δ_i ≤ 1 for both agents (see Footnote 1). Let [x_i^t, y_i^t] denote the offer made at time period t where x_i^t and y_i^t denote the shares for agents a and b respectively. Then, for a given pie, the set of possible offers is: {[x_i^t, y_i^t] : x_i^t = 0 or 1, y_i^t = 0 or 1, and x_i^t + y_i^t = 1}. At time t, if a and b receive shares of x_i^t and y_i^t respectively, then their utilities are: u_i^a(x_i^t, t) = x_i^t × δ^(t−1) if t ≤ n, and 0 otherwise; u_i^b(y_i^t, t) = y_i^t × δ^(t−1) if t ≤ n, and 0 otherwise. The conflict utility (i.e., the utility received in the event that no deal is struck) is zero for both agents.
For the above setting, the agents reason as follows in order to determine what to offer at t = 1. We let A(1) (B(1)) denote a's (b's) equilibrium offer for the first time period. Let agent a denote the first mover (i.e., at t = 1, a proposes to b who should get the pie). To begin, consider the case where the deadline for both agents is n = 1. If b accepts, the division occurs as agreed; if not, neither agent gets anything (since n = 1 is the deadline). Here, a is in a powerful position and is able to propose to keep 100 percent of the pie and give nothing to b (see Footnote 2). Since the deadline is n = 1, b accepts this offer and agreement takes place in the first time period. Now, consider the case where the deadline is n = 2. In order to decide what to offer in the first round, a looks ahead to t = 2 and reasons backwards. Agent a reasons that if negotiation proceeds to the second round, b will take 100 percent of the pie by offering [0, 1] and leave nothing for a. Thus, in the first time period, if a offers b anything less than the whole pie, b will reject the offer. Hence, during the first time period, agent a offers [0, 1]. Agent b accepts this and an agreement occurs in the first time period. In general, if the deadline is n, negotiation proceeds as follows. As before, agent a decides what to offer in the first round by looking ahead as far as t = n and then reasoning backwards. (Footnote 1: Having a different discount factor for different agents only makes the presentation more involved without leading to any changes in the analysis of the strategic behaviour of the agents or the time complexity of finding the equilibrium offers. Hence we have a single discount factor for both agents.) (Footnote 2: It is possible that b may reject such a proposal. However, irrespective of whether b accepts or rejects the proposal, it gets zero utility (because the deadline is n = 1). Thus, we assume that b accepts a's offer.) 952 The Sixth Intl. Joint Conf.
on Autonomous Agents and Multi-Agent Systems (AAMAS 07). Agent a's offer for t = 1 depends on who the offering agent is for the last time period. This, in turn, depends on whether n is odd or even. Since a makes an offer at t = 1 and the agents use the alternating offers protocol, the offering agent for the last time period is b if n is even and it is a if n is odd. Thus, depending on whether n is odd or even, a makes the following offer at t = 1: A(1) = OFFER [1, 0] if n is odd, ACCEPT if it is b's turn; B(1) = OFFER [0, 1] if n is even, ACCEPT if it is a's turn. Agent b accepts this offer and negotiation ends in the first time period. Note that the equilibrium outcome depends on who makes the first move. Since we have two agents and either of them could move first, we get two possible equilibrium outcomes. On the basis of the above equilibrium for single-issue negotiation with complete information, we first obtain the equilibrium for multiple issues and then show that computing these offers is a hard problem. We then present a time efficient approximate equilibrium. 3. MULTI-ISSUE NEGOTIATION We first analyse the complete information setting. This section forms the base which we extend to the case of information uncertainty in Section 4. Here a and b negotiate over m > 1 indivisible issues. These issues are m distinct pies and the agents want to determine how to distribute the pies between themselves. Let S = {1, 2, ..., m} denote the set of m pies. As before, each pie is of size 1. Let the discount factor for issue c, where 1 ≤ c ≤ m, be 0 < δ_c ≤ 1. For each issue, let n denote each agent's deadline. In the offer for time period t (where 1 ≤ t ≤ n), agent a's (b's) share for each of the m issues is now represented as an m element vector x^t ∈ B^m (y^t ∈ B^m) where B denotes the set {0, 1}. Thus, if agent a's share for issue c at time t is x_c^t, then agent b's share is y_c^t = (1 − x_c^t). The shares for a and b are together represented as the package [x^t, y^t].
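The single-issue backward-induction argument of Section 2 can be condensed into a few lines. A minimal sketch (the helper name is hypothetical, not from the paper):

```python
def first_mover_share(n: int) -> int:
    """Share of the first mover (agent a) in the period-1 agreement."""
    # Reason backwards from the deadline: the proposer in the last
    # period keeps the whole (indivisible) pie. At every earlier period
    # the proposer must concede the pie the next-period proposer could
    # guarantee itself, so ownership flips once per step back.
    share = 1  # last-period proposer keeps the pie
    for _ in range(n - 1):
        share = 1 - share
    return share

# Agreement is reached at t = 1; a keeps the pie iff n is odd.
assert first_mover_share(1) == 1   # n = 1: a keeps everything
assert first_mover_share(2) == 0   # n = 2: a must offer [0, 1]
```

This reproduces the parity rule above: the whole pie goes to whichever agent proposes in the final period.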
As is traditional in multi-issue utility theory, we define an agent's cumulative utility using the standard additive form [12]. The functions U^a : B^m × B^m × N+ → R and U^b : B^m × B^m × N+ → R give the cumulative utilities for a and b respectively at time t. These are defined as follows: U^a([x^t, y^t], t) = Σ_{c=1}^m k_c^a u_c^a(x_c^t, t) if t ≤ n, and 0 otherwise (1); U^b([x^t, y^t], t) = Σ_{c=1}^m k_c^b u_c^b(y_c^t, t) if t ≤ n, and 0 otherwise (2); where k^a ∈ N_+^m denotes an m element vector of constants for agent a and k^b ∈ N_+^m that for b. Here N+ denotes the set of positive integers. These vectors indicate how the agents value different issues. For example, if k_c^a > k_{c+1}^a, then agent a values issue c more than issue c + 1. Likewise for agent b. In other words, the m issues are perfect substitutes (i.e., all that matters to an agent is its total utility for all the m issues and not that for any subset of them). In all the settings we study, the issues will be perfect substitutes. To begin, each agent has complete information about all negotiation parameters (i.e., n, m, k_c^a, k_c^b, and δ_c for 1 ≤ c ≤ m). Now, multi-issue negotiation can be done using different procedures. Broadly speaking, there are three key procedures for negotiating multiple issues [19]: 1. the package deal procedure where all the issues are settled together as a bundle, 2. the sequential procedure where the issues are discussed one after another, and 3. the simultaneous procedure where the issues are discussed in parallel. Between these three procedures, the package deal is known to generate Pareto optimal outcomes [19, 6]. Hence we adopt it here. We first give a brief description of the procedure and then determine the equilibrium strategies for it. 3.1 The package deal procedure In this procedure, the agents use the same protocol as for single-issue negotiation (described in Section 2). However, an offer for the package deal includes a proposal for each issue under negotiation.
Thus, for m issues, an offer includes m divisions, one for each issue. Agents are allowed to either accept a complete offer (i.e., all m issues) or reject a complete offer. An agreement can therefore take place either on all m issues or on none of them. As per single-issue negotiation, an agent decides what to offer by looking ahead and reasoning backwards. However, since an offer for the package deal includes a share for all the m issues, the agents can now make tradeoffs across the issues in order to maximise their cumulative utilities. For 1 ≤ c ≤ m, the equilibrium offer for issue c at time t is denoted as [a_c^t, b_c^t] where a_c^t and b_c^t denote the shares for agents a and b respectively. We denote the equilibrium package at time t as [a^t, b^t] where a^t ∈ B^m (b^t ∈ B^m) is an m element vector that denotes a's (b's) share for each of the m issues. Also, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. The symbols 0 and 1 denote m element vectors of zeroes and ones respectively. Note that for 1 ≤ t ≤ n, a_c^t + b_c^t = 1 (i.e., the sum of the agents' shares (at time t) for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let A(t) (respectively B(t)) denote the equilibrium strategy for agent a (respectively b). 3.2 Equilibrium strategies As mentioned in Section 1, the package deal allows agents to make tradeoffs. We let TRADEOFFA (TRADEOFFB) denote agent a's (b's) function for making tradeoffs. We let P denote a set of parameters to the procedure TRADEOFFA (TRADEOFFB) where P = {k^a, k^b, δ, m}. Given this, the following theorem characterises the equilibrium for the package deal procedure. THEOREM 1. For the package deal procedure, the following strategies form a Nash equilibrium.
The equilibrium strategy for t = n is: A(n) = OFFER [1, 0] if it is a's turn, ACCEPT if it is b's turn; B(n) = OFFER [0, 1] if it is b's turn, ACCEPT if it is a's turn. For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows: A(t) = OFFER TRADEOFFA(P, UB(t), t) if it is a's turn; if U^a([x^t, y^t], t) ≥ UA(t) then ACCEPT else REJECT if it is b's turn. B(t) = OFFER TRADEOFFB(P, UA(t), t) if it is b's turn; if U^b([x^t, y^t], t) ≥ UB(t) then ACCEPT else REJECT if it is a's turn. Here UA(t) = U^a([a^{t+1}, b^{t+1}], t + 1) and UB(t) = U^b([a^{t+1}, b^{t+1}], t + 1). PROOF. We look ahead to the last time period (i.e., t = n) and then reason backwards. To begin, if negotiation reaches the deadline (n), then the agent whose turn it is takes everything and leaves nothing for its opponent. Hence, we get the strategies A(n) and B(n) as given in the statement of the theorem. In all the preceding time periods (t < n), the offering agent proposes a package that gives its opponent a cumulative utility equal to what the opponent would get from its own equilibrium offer for the next time period. During time period t, either a or b could be the offering agent. Consider the case where a makes an offer at t. The package that a offers at t gives b a cumulative utility of U^b([a^{t+1}, b^{t+1}], t + 1). However, since there is more than one issue, there is more than one package that gives b this cumulative utility. From among these packages, a offers the one that maximises its own cumulative utility (because it is a utility maximiser). Thus, the problem for a is to find the package [a^t, b^t] so as to: maximize Σ_{c=1}^m k_c^a (1 − b_c^t) δ_c^{t−1} (3) such that Σ_{c=1}^m b_c^t k_c^b ≥ UB(t) and b_c^t = 0 or 1 for 1 ≤ c ≤ m, where UB(t), δ_c^{t−1}, k_c^a, and k_c^b are constants and b_c^t (1 ≤ c ≤ m) is a variable.
Assume that the function TRADEOFFA takes parameters P, UB(t), and t, to solve the maximisation problem given in Equation 3 and returns the corresponding package. If there is more than one package that solves Equation 3, then TRADEOFFA returns any one of them (because agent a gets equal utility from all such packages and so does agent b). The function TRADEOFFB for agent b is analogous to that for a. On the other hand, the equilibrium strategy for the agent that receives an offer is as follows. For time period t, let b denote the receiving agent. Then, b accepts [x^t, y^t] if UB(t) ≤ U^b([x^t, y^t], t), otherwise it rejects the offer because it can get a higher utility in the next time period. The equilibrium strategy for a as receiving agent is defined analogously. In this way, we reason backwards and obtain the offers for the first time period. Thus, we get the equilibrium strategies (A(t) and B(t)) given in the statement of the theorem. The following example illustrates how the agents make tradeoffs using the above equilibrium strategies. EXAMPLE 1. Assume there are m = 2 issues for negotiation, the deadline for both issues is n = 2, and the discount factor for both issues for both agents is δ = 1/2. Let k_1^a = 3, k_2^a = 1, k_1^b = 1, and k_2^b = 5. Let agent a be the first mover. By using backward reasoning, a knows that if negotiation reaches the second time period (which is the deadline), then b will get a hundred percent of both the issues. This gives b a cumulative utility of UB(2) = 1/2 + 5/2 = 3. Thus, in the first time period, if b gets anything less than a utility of 3, it will reject a's offer. So, at t = 1, a offers the package where it gets issue 1 and b gets issue 2. This gives a cumulative utility of 3 to a and 5 to b. Agent b accepts the package and an agreement takes place in the first time period. The maximization problem in Equation 3 can be viewed as the 0-1 knapsack problem (see Footnote 3).
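The tradeoff in Example 1 can be reproduced by solving the knapsack-style minimization (Equation 4) directly. A minimal sketch using exhaustive search over b's 0/1 shares, fine for small m (`tradeoff_a` is a hypothetical name, not the paper's TRADEOFFA):

```python
from itertools import product

def tradeoff_a(ka, kb, UB):
    """Minimize sum(ka[c]*b[c]) over b in {0,1}^m subject to
    sum(kb[c]*b[c]) >= UB (Equation 4); returns b's shares."""
    best, best_cost = None, float("inf")
    for b in product((0, 1), repeat=len(ka)):
        if sum(kc * bc for kc, bc in zip(kb, b)) >= UB:
            cost = sum(kc * bc for kc, bc in zip(ka, b))
            if cost < best_cost:
                best, best_cost = b, cost
    return best  # a keeps the complement of these shares

# Example 1: UB(2) = 3, so at t = 1 agent a concedes only issue 2.
assert tradeoff_a([3, 1], [1, 5], 3) == (0, 1)
```

Conceding issue 2 alone costs a only 1 unit of value while giving b a utility of 5 ≥ 3, so a keeps issue 1, matching the example's outcome.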
In the 0-1 knapsack problem, we have a set of m items where each item has a profit and a weight. There is a knapsack with a given capacity. The objective is to fill the knapsack with items so as to maximize the cumulative profit of the items in the knapsack. (Footnote 3: Note that for the case of divisible issues this is the fractional knapsack problem. The fractional knapsack problem is computationally easy; it can be solved in time polynomial in the number of items in the knapsack problem [17]. In contrast, the 0-1 knapsack problem is computationally hard.) This problem is analogous to the negotiation problem we want to solve (i.e., the maximization problem of Equation 3). Since k_c^a and δ_c^{t−1} are constants, maximizing Σ_{c=1}^m k_c^a (1 − b_c^t) δ_c^{t−1} is the same as minimizing Σ_{c=1}^m k_c^a b_c^t. Hence Equation 3 can be written as: minimize Σ_{c=1}^m k_c^a b_c^t (4) such that Σ_{c=1}^m b_c^t k_c^b ≥ UB(t) and b_c^t = 0 or 1 for 1 ≤ c ≤ m. Equation 4 is a minimization version of the standard 0-1 knapsack problem with m items where k_c^a represents the profit for item c, k_c^b the weight for item c, and UB(t) the knapsack capacity. (Footnote 4: Note that for the standard 0-1 knapsack problem the weights, profits, and the capacity are positive integers. However, a 0-1 knapsack problem with fractions and non-positive values can easily be transformed to one with positive integers in time linear in m using the methods given in [8, 17].) Example 1 was for two issues and so it was easy to find the equilibrium offers. But, in general, it is not computationally easy to find the equilibrium offers of Theorem 1. The following theorem proves this. THEOREM 2. For the package deal procedure, the problem of finding the equilibrium offers given in Theorem 1 is NP-hard. PROOF. Finding the equilibrium offers given in Theorem 1 requires solving the 0-1 knapsack problem given in Equation 4. Since the 0-1 knapsack problem is NP-hard [17], the problem of finding equilibrium for the package deal is also NP-hard. 3.3 Approximate equilibrium Researchers in the area of algorithms have found time efficient methods for computing approximate solutions to 0-1 knapsack problems [10]. Hence we use these methods to find a solution to our negotiation problem. At this stage, we would like to point out the main difference between solving the 0-1 knapsack problem and solving our negotiation problem. The 0-1 knapsack problem involves decision making by a single agent regarding which items to place in the knapsack. On the other hand, our negotiation problem involves two players and they are both strategic. Hence, in our case, it is not enough to just find an approximate solution to the knapsack problem, we must also show that such an approximation forms an equilibrium. The traditional approach for overcoming the computational complexity in finding an equilibrium has been to use an approximate equilibrium (see [14, 26] for example). In this approach, a strategy profile is said to form an approximate Nash equilibrium if neither agent can gain more than the constant ε by deviating. Hence, our aim is to use the solution to the 0-1 knapsack problem proposed in [10] and show that it forms an approximate equilibrium to our negotiation problem. Before doing so, we give a brief overview of the key ideas that underlie approximation algorithms. There are two key issues in the design of approximate algorithms [1]: 1. the quality of their solution, and 2. the time taken to compute the approximation. The quality of an approximate algorithm is determined by comparing its performance to that of the optimal algorithm and measuring the relative error [3, 1]. The relative error is defined as (z − z*)/z* where z is the approximate solution and z* the optimal one.
In general, we are interested in finding approximate algorithms whose relative error is bounded from above by a certain constant ε, i.e., (z − z*)/z* ≤ ε (5). Regarding the second issue of time complexity, we are interested in finding fully polynomial approximation algorithms. An approximation algorithm is said to be fully polynomial if for any ε > 0 it finds a solution satisfying Equation 5 in time polynomially bounded by the size of the problem (for the 0-1 knapsack problem, the problem size is equal to the number of items) and by 1/ε [1]. For the 0-1 knapsack problem, Ibarra and Kim [10] presented a fully polynomial approximation method. This method is based on dynamic programming. It is a parametric method that takes ε as a parameter and, for any ε > 0, finds a heuristic solution z with relative error at most ε, such that the time and space complexity grow polynomially with the number of items m and 1/ε. More specifically, the space and time complexity are both O(m/ε²) and hence polynomial in m and 1/ε (see [10] for the detailed approximation algorithm and proof of time and space complexity). Since the Ibarra and Kim method is fully polynomial, we use it to solve our negotiation problem. This is done as follows. For agent a, let APPROX-TRADEOFFA(P, UB(t), t, ε) denote a procedure that returns an approximate solution to Equation 4 using the Ibarra and Kim method. The procedure APPROX-TRADEOFFB(P, UA(t), t, ε) for agent b is analogous. For 1 ≤ c ≤ m, the approximate equilibrium offer for issue c at time t is denoted as [ā_c^t, b̄_c^t] where ā_c^t and b̄_c^t denote the shares for agents a and b respectively. We denote the approximate equilibrium package at time t as [ā^t, b̄^t] where ā^t ∈ B^m (b̄^t ∈ B^m) is an m element vector that denotes a's (b's) share for each of the m issues. Also, as before, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. Note that for 1 ≤ t ≤ n, ā_c^t + b̄_c^t = 1 (i.e., the sum of the agents' shares (at time t) for each pie is one).
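The profit-scaling idea behind fully polynomial schemes such as Ibarra and Kim's can be sketched as follows. This is a simplified, textbook-style scaling scheme, not their exact algorithm, and the names are hypothetical:

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """Approximate 0-1 knapsack: coarsen profits by a factor K, then run
    an exact dynamic program over the (small) scaled profit range."""
    n = len(profits)
    K = eps * max(profits) / n              # profit-scaling factor
    scaled = [int(p / K) for p in profits]  # coarsened integer profits
    top = sum(scaled)
    INF = float("inf")
    dp = [0.0] + [INF] * top                # dp[q]: min weight for scaled profit q
    for s, w in zip(scaled, weights):
        for q in range(top, s - 1, -1):
            if dp[q - s] + w < dp[q]:
                dp[q] = dp[q - s] + w
    best_q = max(q for q in range(top + 1) if dp[q] <= capacity)
    return best_q * K                       # within (1 - eps) of the optimum

approx = knapsack_fptas([3, 1], [1, 4], 5, 0.1)
assert approx >= (1 - 0.1) * 4   # optimum here is 4 (both items fit)
```

Rounding each profit down to a multiple of K loses at most K per item, i.e., at most eps times the largest profit overall, which yields the relative-error bound; the dynamic program's table size (and hence the running time) grows with m and 1/eps rather than with the raw profit values.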
Finally, for time period t (for 1 ≤ t ≤ n) we let Ā(t) (respectively B̄(t)) denote the approximate equilibrium strategy for agent a (respectively b). The following theorem uses this notation and characterizes an approximate equilibrium for multi-issue negotiation. THEOREM 3. For the package deal procedure, the following strategies form an approximate Nash equilibrium. The equilibrium strategy for t = n is: Ā(n) = OFFER [1, 0] if it is a's turn, ACCEPT if it is b's turn; B̄(n) = OFFER [0, 1] if it is b's turn, ACCEPT if it is a's turn. For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows: Ā(t) = OFFER APPROX-TRADEOFFA(P, UB(t), t, ε) if it is a's turn; if U^a([x^t, y^t], t) ≥ UA(t) then ACCEPT else REJECT if it is b's turn. B̄(t) = OFFER APPROX-TRADEOFFB(P, UA(t), t, ε) if it is b's turn; if U^b([x^t, y^t], t) ≥ UB(t) then ACCEPT else REJECT if it is a's turn. Here UA(t) = U^a([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = U^b([ā^{t+1}, b̄^{t+1}], t + 1). An agreement takes place at t = 1. PROOF. As in the proof for Theorem 1, we use backward reasoning. We first obtain the strategies for the last time period t = n. It is straightforward to get these strategies; the offering agent gets a hundred percent of all the issues. Then for t = n − 1, the offering agent must solve the maximization problem of Equation 4 by substituting t = n − 1 in it. For agent a (b), this is done by APPROX-TRADEOFFA (APPROX-TRADEOFFB). These two functions are nothing but Ibarra and Kim's approximation method for solving the 0-1 knapsack problem. These two functions take ε as a parameter and use Ibarra and Kim's approximation method to return a package that approximately maximizes Equation 4. Thus, the relative error for these two functions is the same as that for Ibarra and Kim's method (i.e., it is at most ε where ε is given in Equation 5). Assume that a is the offering agent for t = n − 1.
Agent a must offer a package that gives b a cumulative utility equal to what it would get from its own approximate equilibrium offer for the next time period (i.e., U^b([ā^{t+1}, b̄^{t+1}], t + 1) where [ā^{t+1}, b̄^{t+1}] is the approximate equilibrium package for the next time period). Recall that for the last time period, the offering agent gets a hundred percent of all the issues. Since a is the offering agent for t = n − 1 and the agents use the alternating offers protocol, it is b's turn at t = n. Thus U^b([ā^{t+1}, b̄^{t+1}], t + 1) is equal to b's cumulative utility from receiving a hundred percent of all the issues. Using this utility as the capacity of the knapsack, a uses APPROX-TRADEOFFA and obtains the approximate equilibrium package for t = n − 1. On the other hand, if b is the offering agent at t = n − 1, it uses APPROX-TRADEOFFB to obtain the approximate equilibrium package. In the same way for t < n − 1, the offering agent (say a) uses APPROX-TRADEOFFA to find an approximate equilibrium package that gives b a utility of U^b([ā^{t+1}, b̄^{t+1}], t + 1). By reasoning backwards, we obtain the offer for time period t = 1. If a (b) is the offering agent, it proposes the offer APPROX-TRADEOFFA(P, UB(1), 1, ε) (APPROX-TRADEOFFB(P, UA(1), 1, ε)). The receiving agent accepts the offer. This is because the relative error in its cumulative utility from the offer is at most ε. An agreement therefore takes place in the first time period. THEOREM 4. The time complexity of finding the approximate equilibrium offer for the first time period is O(nm/ε²). PROOF. The time complexity of APPROX-TRADEOFFA and APPROX-TRADEOFFB is the same as the time complexity of the Ibarra and Kim method [10] (i.e., O(m/ε²)). In order to find the equilibrium offer for the first time period using backward reasoning, APPROX-TRADEOFFA (or APPROX-TRADEOFFB) is invoked n times. Hence the time complexity of finding the approximate equilibrium offer for the first time period is O(nm/ε²).
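The backward reasoning in these proofs can be sketched end to end. In this sketch an exact (exponential-time) tradeoff stands in for APPROX-TRADEOFF, a single discount factor is used for all issues, and the names are hypothetical, not the paper's implementation:

```python
from itertools import product

def equilibrium_package(ka, kb, delta, n):
    """Package agreed at t = 1, returned as a's 0/1 shares (a proposes at odd t)."""
    m = len(ka)
    def util(k, shares, t):
        # Additive cumulative utility with discounting (Equations 1-2).
        return sum(kc * sc * delta ** (t - 1) for kc, sc in zip(k, shares))
    # At the deadline t = n, the proposer keeps every issue.
    x = [1] * m if n % 2 == 1 else [0] * m
    for t in range(n - 1, 0, -1):
        a_proposes = (t % 2 == 1)
        y = [1 - xc for xc in x]
        # Opponent's continuation utility from the period-(t+1) package.
        need = util(kb, y, t + 1) if a_proposes else util(ka, x, t + 1)
        own_k, opp_k = (ka, kb) if a_proposes else (kb, ka)
        best, best_u = None, -1.0
        for cand in product((0, 1), repeat=m):  # proposer's own shares
            opp = [1 - c for c in cand]
            if util(opp_k, opp, t) >= need and util(own_k, cand, t) > best_u:
                best, best_u = list(cand), util(own_k, cand, t)
        x = best if a_proposes else [1 - c for c in best]
    return x

# Example 1: a keeps issue 1 and concedes issue 2.
assert equilibrium_package([3, 1], [1, 5], 0.5, 2) == [1, 0]
```

Replacing the inner exhaustive search with an FPTAS for the knapsack step turns this exact recursion into the O(nm/ε²) approximate scheme of Theorems 3 and 4.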
This analysis was done in a complete information setting. However, an extension of this analysis to an incomplete information setting, where the agents have probability distributions over some uncertain parameter, is straightforward as long as the negotiation is done offline; i.e., the agents know their preference for each individual issue before negotiation begins. For instance, consider the case where different agents have different discount factors, and each agent is uncertain about its opponent's discount factor although it knows its own. This uncertainty is modelled with a probability distribution over the possible values for the opponent's discount factor, with this distribution being common knowledge to the agents. All our analysis for the complete information setting still holds for this incomplete information setting, except that an agent must now use the given probability distribution to find its opponent's expected utility instead of its actual utility. Hence, instead of analyzing an incomplete information setting for offline negotiation, we focus on online multi-issue negotiation.

The Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

4. ONLINE MULTI-ISSUE NEGOTIATION
We now consider a more general and, arguably, more realistic version of multi-issue negotiation, where the agents are uncertain about the issues they will have to negotiate about in the future. In this setting, when negotiating an issue, the agents know that they will negotiate more issues in the future, but they are uncertain about the details of those issues. As before, let m be the total number of issues that are up for negotiation. The agents have a probability distribution over the possible values of k^a_c and k^b_c. For 1 ≤ c ≤ m, let k^a_c and k^b_c be uniformly distributed over [0, 1]. This probability distribution, n, and m are common knowledge to the agents.
However, the agents come to know k^a_c and k^b_c only just before negotiation for issue c begins. Once the agents reach an agreement on issue c, it cannot be re-negotiated. This scenario requires online negotiation since the agents must make decisions about an issue prior to having information about the future issues [3]. We first give a brief introduction to online problems and then draw an analogy between the online knapsack problem and the negotiation problem we want to solve. In an online problem, data is given to the algorithm incrementally, one unit at a time [3]. The online algorithm must also produce the output incrementally: after seeing i units of input it must output the ith unit of output. Since decisions about the output are made with incomplete knowledge about the entire input, an online algorithm often cannot produce an optimal solution. Such an algorithm can only approximate the performance of the optimal algorithm that sees all the inputs in advance. In the design of online algorithms, the main aim is to achieve a performance that is close to that of the optimal offline algorithm on each input. An online algorithm is said to be stochastic if it makes decisions on the basis of the probability distributions for the future inputs. The performance of stochastic online algorithms is assessed in terms of the expected difference between the optimum and the approximate solution (denoted E[z*_m − z_m], where z*_m is the optimal and z_m the approximate solution). Note that the subscript m indicates that this difference depends on m. We now describe the protocol for online negotiation and then obtain an approximate equilibrium. The protocol is defined as follows. Let agent a denote the first mover (since we focus on the package deal procedure, the first mover is the same for all the m issues).

Step 1. For c = 1, the agents are given the values of k^a_c and k^b_c. These two values are now common knowledge.⁵

Step 2.
The agents settle issue c using the alternating offers protocol described in Section 2. Negotiation for issue c must end within n time periods from the start of negotiation on that issue. If an agreement is not reached within this time, then negotiation fails on this and on all remaining issues.

Step 3. The above steps are repeated for issues c = 2, 3, ..., m. Negotiation for issue c (2 ≤ c ≤ m) begins in the time period following an agreement on issue c − 1.

⁵We assume common knowledge because it simplifies exposition. However, if k^a_c (k^b_c) is a's (b's) private knowledge, then our analysis still holds, but an agent must then find its opponent's expected utility on the basis of the p.d.f.s for k^a_c and k^b_c.

Thus, during time period t, the problem for the offering agent (say a) is to find the optimal offer for issue c on the basis of k^a_c and k^b_c and the probability distribution for k^a_i and k^b_i (c < i ≤ m). In order to solve this online negotiation problem, we draw an analogy with the online knapsack problem. Before doing so, however, we give a brief overview of the online knapsack problem. In the online knapsack problem, there are m items. The agent must examine the m items one at a time, in the order they are input (i.e., as their profit and size coefficients become known). Hence, the algorithm must decide whether or not to include each item in the knapsack as soon as its weight and profit become known, without knowledge of the items still to be seen, except for their total number. Note that since the agents have a probability distribution over the weights and profits of the future items, this is a case of the stochastic online knapsack problem. Our online negotiation problem is analogous to the online knapsack problem. This analogy is described in detail in the proof for Theorem 5. Again, researchers in algorithms have developed time-efficient approximate solutions to the online knapsack problem [16].
Hence we use this solution and show that it forms an equilibrium. The following theorem characterizes an approximate equilibrium for online negotiation. Here the agents must choose a strategy without knowing the features of the future issues. Because of this information incompleteness, the relevant solution concept is that of a Bayes' Nash equilibrium (BNE), in which each agent plays a best response to the other agent with respect to their expected utilities [18]. However, finding an agent's BNE strategy is analogous to solving the online 0-1 knapsack problem, and the online knapsack problem can only be solved approximately [16]. Hence the relevant equilibrium solution concept is an approximate BNE (see [26] for example). The following theorem finds this equilibrium using the procedures ONLINE-TRADEOFFA and ONLINE-TRADEOFFB, which are defined in the proof of the theorem. For a given time period, we let z_m denote the approximately optimal solution generated by ONLINE-TRADEOFFA (or ONLINE-TRADEOFFB) and z*_m the actual optimum.

THEOREM 5. For the package deal procedure, the following strategies form an approximate Bayes' Nash equilibrium. The equilibrium strategy for t = n is:

A(n) = OFFER [1, 0] if it is a's turn; ACCEPT if it is b's turn
B(n) = OFFER [0, 1] if it is b's turn; ACCEPT if it is a's turn

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

A(t) = OFFER ONLINE-TRADEOFFA(P, UB(t), t) if it is a's turn; if Ua([x^t, y^t], t) ≥ UA(t) then ACCEPT else REJECT if it is b's turn
B(t) = OFFER ONLINE-TRADEOFFB(P, UA(t), t) if it is b's turn; if Ub([x^t, y^t], t) ≥ UB(t) then ACCEPT else REJECT if it is a's turn

where UA(t) = Ua([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = Ub([ā^{t+1}, b̄^{t+1}], t + 1). An agreement on issue c takes place at t = c. For a given time period, the expected difference between the solution generated by the optimal strategy and that by the approximate strategy is E[z*_m − z_m] = O(√m).
PROOF. As in Theorem 1, we find the equilibrium offer for time period t = 1 using backward induction. Let a be the offering agent for t = 1 for all the m issues. Consider the last time period t = n (recall from Step 2 of the online protocol that n is the deadline for completing negotiation on the first issue). Since the first mover is the same for all the issues, and the agents make offers alternately, the offering agent for t = n is also the same for all the m issues. Assume that b is the offering agent for t = n. As in Section 3, the offering agent for t = n gets a hundred percent of all the m issues. Since b is the offering agent for t = n, his utility for this time period is:

UB(n) = k^b_1 δ_1^{n−1} + (1/2) Σ_{i=2}^{m} δ_i^{i(n−1)}   (6)

Recall that k^a_i and k^b_i (for c < i ≤ m) are not known to the agents. Hence, the agents can only find their expected utilities from the future issues on the basis of the probability distribution functions for k^a_i and k^b_i. However, during the negotiation for issue c, the agents know k^a_c and k^b_c (see Step 1 of the online protocol). Hence, a computes UB(n) as follows. Agent b's utility from issue c = 1 is k^b_1 δ_1^{n−1} (which is the first term of Equation 6). Then, on the basis of the probability distribution functions for k^a_i and k^b_i, agent a computes b's expected utility from each future issue i as δ_i^{i(n−1)}/2 (since k^a_i and k^b_i are uniformly distributed on [0, 1]). Thus, b's expected cumulative utility from these m − c issues is (1/2) Σ_{i=2}^{m} δ_i^{i(n−1)} (which is the second term of Equation 6). Now, in order to decide what to offer for issue c = 1, the offering agent for t = n − 1 (i.e., agent a) must solve the following online knapsack problem:

maximize Σ_{i=1}^{m} k^a_i (1 − b̄^t_i) δ_i^{n−1}   (7)
such that Σ_{i=1}^{m} k^b_i b̄^t_i ≥ UB(n)
         b̄^t_i = 0 or 1 for 1 ≤ i ≤ m

The only variables in the above maximization problem are the b̄^t_i.
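The computation of UB(n), which then serves as the knapsack capacity, can be sketched directly. This is a hypothetical helper: the function name is ours, and we read the discount exponent for a future issue i from Equation 6 as i(n − 1), which matches the first term at i = 1.

```python
# Sketch of Equation 6: b's expected utility at t = n from receiving all
# issues. The current issue's k^b_1 is known; each future k^b_i ~ U[0,1]
# contributes its expectation 1/2, discounted per the issue's factor.

def expected_ub_last_period(kb1, deltas, n):
    """deltas[i-1] is the discount factor of issue i (issues 1-indexed)."""
    current = kb1 * deltas[0] ** (n - 1)
    future = 0.5 * sum(deltas[i - 1] ** (i * (n - 1))
                       for i in range(2, len(deltas) + 1))
    return current + future

# Two issues, delta = 0.5 each, n = 2, known k^b_1 = 1:
print(expected_ub_last_period(1.0, [0.5, 0.5], 2))  # → 0.625
```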
Now, maximizing Σ_{i=1}^{m} k^a_i (1 − b̄^t_i) δ_i^{n−1} is the same as minimizing Σ_{i=1}^{m} k^a_i b̄^t_i, since the δ_i^{n−1} and k^a_i are constants. Thus, we write Equation 7 as:

minimize Σ_{i=1}^{m} k^a_i b̄^t_i   (8)
such that Σ_{i=1}^{m} k^b_i b̄^t_i ≥ UB(n)
         b̄^t_i = 0 or 1 for 1 ≤ i ≤ m

The above optimization problem is analogous to the online 0-1 knapsack problem. An algorithm to solve the online knapsack problem has already been proposed in [16]. This algorithm is called the fixed-choice online algorithm. It has time complexity linear in the number of items (m) in the knapsack problem. We use this to solve our online negotiation problem. Thus, our ONLINE-TRADEOFFA algorithm is simply the fixed-choice online algorithm and therefore has the same time complexity as the latter. This algorithm takes the values of k^a_i and k^b_i one at a time and generates an approximate solution to the above knapsack problem. The expected difference between the optimum and the approximate solution is E[z*_m − z_m] = O(√m) [16] (see [16] for the detailed fixed-choice online algorithm and a proof of this bound). The fixed-choice online algorithm of [16] is a generalization of the basic greedy algorithm for the offline knapsack problem; the idea behind it is as follows. A threshold value is determined on the basis of the information regarding weights and profits for the 0-1 knapsack problem. The method then includes in the knapsack every item whose profit density (an item's profit per unit weight) exceeds the threshold, until either the knapsack is filled or all m items have been considered. In more detail, the algorithm ONLINE-TRADEOFFA works as follows. It first gets the values of k^a_1 and k^b_1 and finds b̄^t_c. Since we have a 0-1 knapsack problem, b̄^t_c can be either zero or one. Now, if b̄^t_c = 1 for t = n, then b̄^t_c must be one for 1 ≤ t < n (i.e., a must offer b̄^t_c = 1 at t = 1).
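The threshold-based greedy idea just described can be sketched as follows. This is a hypothetical illustration: the actual threshold rule of [16] is derived from the input distributions and is more involved than the fixed value passed here.

```python
# Sketch of a threshold-based greedy in the spirit of the fixed-choice
# online algorithm: commit to a profit-density threshold up front, then
# take each arriving item whose density meets the threshold, provided it
# still fits. Decisions are final, as the online setting requires.

def fixed_choice(capacity, items, threshold):
    """items arrive online as (profit, weight) pairs."""
    used, taken = 0.0, []
    for i, (profit, weight) in enumerate(items):
        if profit / weight >= threshold and used + weight <= capacity:
            taken.append(i)
            used += weight
    return taken

# Capacity 2, threshold 1.0; items 0 and 2 meet the density bar and fit.
print(fixed_choice(2, [(3, 1), (0.5, 1), (2, 1), (1, 2)], 1.0))  # → [0, 2]
```

A single pass over the items gives the time complexity linear in m cited in Theorem 6.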
If b̄^t_c = 1 for t = n, but a offers b̄^t_c = 0 at t = 1, then agent b gets less utility than it expects from a's offer and rejects the proposal. Thus, if b̄^t_c = 1 for t = n, then the optimal strategy for a is to offer b̄^t_c = 1 at t = 1. Agent b accepts the offer. Thus, negotiation on the first issue starts at t = 1 and an agreement on it is also reached at t = 1. In the next time period (i.e., t = 2), negotiation proceeds to the next issue. The deadline for the second issue is n time periods from the start of negotiation on that issue. For c = 2, the algorithm ONLINE-TRADEOFFA is given the values of k^a_2 and k^b_2 and finds b̄^t_c as described above. Agent a offers b̄^t_c at t = 2 and b accepts. Thus, negotiation on the second issue starts at t = 2 and an agreement on it is also reached at t = 2. This process repeats for the remaining issues c = 3, ..., m. Thus, each issue is agreed upon in the same time period in which its negotiation starts. As negotiation for the next issue starts in the following time period (see Step 3 of the online protocol), agreement on issue i occurs at time t = i. On the other hand, if b is the offering agent at t = 1, he uses the algorithm ONLINE-TRADEOFFB, which is defined analogously. Thus, irrespective of who makes the first move, all m issues are settled at time t = m.

THEOREM 6. The time complexity of finding the approximate equilibrium offers of Theorem 5 is linear in m.

PROOF. The time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is the same as the time complexity of the fixed-choice online algorithm of [16]. Since the latter has time complexity linear in m, the time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is also linear in m.

It is worth noting that, for the 0-1 knapsack problem, the lower bound on the expected difference between the optimum and the solution found by any online algorithm is Ω(1) [16]. Thus, this lower bound also holds for our negotiation problem.

5.
RELATED WORK
Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. Since Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on it; if we change the agenda, then these utilities change. Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work, which mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above mentioned work differs from ours in that we focus on indivisible issues while the others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation. Existing work for the case of indivisible issues has mostly dealt with task allocation problems (for tasks that cannot be partitioned) among a group of agents. The problem of task allocation has been previously studied in the context of coalitions involving more than two agents.
For example, [25] analyze the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. On the other hand, [22] focus on the use of contracts for task allocation to multiple self-interested agents, but that work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9], but not for bilateral negotiations (which are the focus of our work).

6. CONCLUSIONS
This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation, where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity. There are several interesting directions for future work.
First, for online negotiation, we assumed that the constants k^a_c and k^b_c are both uniformly distributed. It will be interesting to analyze the case where k^a_c and k^b_c have other, possibly different, probability distributions. Apart from this, we treated the number of issues as common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.

7. REFERENCES
[1] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi. Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties. Springer, 2003.
[2] M. Bac and H. Raff. Issue-by-issue negotiations: the role of information and time preference. Games and Economic Behavior, 13:125-134, 1996.
[3] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[4] S. J. Brams. Fair Division: From Cake Cutting to Dispute Resolution. Cambridge University Press, 1996.
[5] L. A. Busch and I. J. Horstman. Bargaining frictions, bargaining procedures and implied costs in multiple-issue bargaining. Economica, 64:669-680, 1997.
[6] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Multi-issue negotiation with deadlines. Journal of Artificial Intelligence Research, 27:381-417, 2006.
[7] C. Fershtman. The importance of the agenda in bargaining. Games and Economic Behavior, 2:224-238, 1990.
[8] F. Glover. A multiphase dual algorithm for the zero-one integer programming problem. Operations Research, 13:879-919, 1965.
[9] M. T. Hajiaghayi, R. Kleinberg, and D. C. Parkes. Adaptive limited-supply online auctions. In ACM Conference on Electronic Commerce (ACM EC-04), pages 71-80, New York, 2004.
[10] O. H. Ibarra and C. E. Kim. Fast approximation algorithms for the knapsack and sum of subset problems. Journal of the ACM, 22:463-468, 1975.
[11] R. Inderst. Multi-issue bargaining with endogenous agenda. Games and Economic Behavior, 30:64-82, 2000.
[12] R. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-offs. John Wiley, New York, 1976.
[13] S. Kraus. Strategic Negotiation in Multi-Agent Environments. The MIT Press, Cambridge, Massachusetts, 2001.
[14] D. Lehman, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002.
[15] A. Lomuscio, M. Wooldridge, and N. R. Jennings. A classification scheme for negotiation in electronic commerce. International Journal of Group Decision and Negotiation, 12(1):31-56, 2003.
[16] A. Marchetti-Spaccamela and C. Vercellis. Stochastic on-line knapsack problems. Mathematical Programming, 68:73-104, 1995.
[17] S. Martello and P. Toth. Knapsack Problems: Algorithms and Computer Implementations. John Wiley and Sons, 1990.
[18] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, 1994.
[19] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, Cambridge, USA, 1982.
[20] J. S. Rosenschein and G. Zlotkin. Rules of Encounter. MIT Press, 1994.
[21] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, January 1982.
[22] T. Sandholm and V. Lesser. Levelled commitment contracts and strategic breach. Games and Economic Behavior: Special Issue on AI and Economics, 35:212-270, 2001.
[23] T. Sandholm and N. Vulkan. Bargaining with deadlines. In AAAI-99, pages 44-51, Orlando, FL, 1999.
[24] T. C. Schelling. An essay on bargaining. American Economic Review, 46:281-306, 1956.
[25] O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence Journal, 101(1-2):165-200, 1998.
[26] S. Singh, V. Soni, and M. Wellman. Computing approximate Bayes-Nash equilibria in tree games of incomplete information. In Proceedings of the ACM Conference on Electronic Commerce (ACM EC), pages 81-90, New York, May 2004.
[27] I. Stahl. Bargaining Theory. Economics Research Institute, Stockholm School of Economics, Stockholm, 1972.

Approximate and Online Multi-Issue Negotiation

ABSTRACT
This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation, where each issue is viewed as a pie of size one. The issues are "indivisible" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation, where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m).
These approximate strategies also have polynomial time complexity.

1. INTRODUCTION
Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter-offers, in order to obtain a mutually acceptable outcome. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter-offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents.
Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit.
Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from that of those without (see [21] for single-issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation, where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents. The remainder of the paper is organised as follows.
We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes.

2. SINGLE-ISSUE NEGOTIATION
We adopt the single-issue model of [27] because it is a model in which, during negotiation, the parties are allowed to make offers from a set of discrete offers. Since our focus is on indivisible issues (i.e., parties are allowed to make one of two possible offers: zero or one), our scenario fits in well with [27]. Hence we use this basic single-issue model and extend it to multiple issues. Before doing so, we give an overview of this model and its equilibrium strategies. There are two strategic agents: a and b. Each agent has time constraints in the form of deadlines and discount factors. The two agents negotiate over a single indivisible issue (i). This issue is a 'pie' of size 1 and the agents want to determine who gets the pie. There is a deadline (i.e., a number of rounds by which negotiation must end). Let n ∈ N+ denote this deadline. The agents use an alternating offers protocol (like that of Rubinstein [18]), which proceeds through a series of time periods. One of the agents, say a, starts negotiation in the first time period (i.e., t = 1) by making an offer (x_i = 0 or 1) to b. Agent b can either accept or reject the offer. If it accepts, negotiation ends in an agreement with a getting x_i and b getting y_i = 1 − x_i. Otherwise, negotiation proceeds to the next time period, in which agent b makes a counter-offer. This process of making offers continues until one of the agents either accepts an offer or quits negotiation (resulting in a conflict).
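The protocol loop just described can be sketched as a simple driver. The strategy representation and names here are illustrative assumptions, not from the paper.

```python
# A hypothetical driver for the alternating offers protocol: agents take
# turns proposing; the responder either accepts (ending negotiation) or
# rejects; passing the deadline n yields a conflict (zero utility for both).

def alternating_offers(n, strat_a, strat_b):
    """Each strategy is a pair (offer_fn, accept_fn):
    offer_fn(t) returns an offer [x, y]; accept_fn(offer, t) returns a bool."""
    for t in range(1, n + 1):
        proposer, responder = (strat_a, strat_b) if t % 2 == 1 else (strat_b, strat_a)
        offer = proposer[0](t)
        if responder[1](offer, t):
            return offer, t          # agreement reached at time t
    return None, None                # deadline passed: conflict

# Example: a always proposes to keep the pie; b accepts anything (toy strategies).
strat_a = (lambda t: [1, 0], lambda offer, t: True)
strat_b = (lambda t: [0, 1], lambda offer, t: True)
print(alternating_offers(3, strat_a, strat_b))  # → ([1, 0], 1)
```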
Thus, there are three possible actions an agent can take during any time period: accept the last offer, make a new counter-offer, or quit the negotiation. An essential feature of negotiations involving alternating offers is that the agents' utilities decrease with time [21]. Specifically, the decrease occurs at each step of offer and counter-offer. This decrease is represented with a discount factor 0 < δ_i < 1 for both¹ agents. Let [x^t_i, y^t_i] denote the offer made at time period t, where x^t_i and y^t_i denote the shares for agents a and b respectively. Then, for a given pie, the set of possible offers is {[x^t_i, y^t_i] : x^t_i ∈ {0, 1}, y^t_i = 1 − x^t_i}. The conflict utility (i.e., the utility received in the event that no deal is struck) is zero for both agents. For the above setting, the agents reason as follows in order to determine what to offer at t = 1. We let A(1) (B(1)) denote a's (b's) equilibrium offer for the first time period. Let agent a denote the first mover (i.e., at t = 1, a proposes to b who should get the pie). To begin, consider the case where the deadline for both agents is n = 1. If b accepts, the division occurs as agreed; if not, neither agent gets anything (since n = 1 is the deadline). Here, a is in a powerful position and is able to propose to keep 100 percent of the pie and give nothing to b². Since the deadline is n = 1, b accepts this offer and agreement takes place in the first time period. Now, consider the case where the deadline is n = 2. In order to decide what to offer in the first round, a looks ahead to t = 2 and reasons backwards. Agent a reasons that if negotiation proceeds to the second round, b will take 100 percent of the pie by offering [0, 1] and leave nothing for a. Thus, in the first time period, if a offers b anything less than the whole pie, b will reject the offer. Hence, during the first time period, agent a offers [0, 1]. Agent b accepts this and an agreement occurs in the first time period. In general, if the deadline is n, negotiation proceeds as follows.
As before, agent a decides what to offer in the first round by looking ahead as far as t = n and then reasoning backwards. Agent a's offer for t = 1 depends on who the offering agent is for the last time period. This, in turn, depends on whether n is odd or even. Since a makes an offer at t = 1 and the agents use the alternating offers protocol, the offering agent for the last time period is b if n is even and a if n is odd. Thus, depending on whether n is odd or even, a makes the following offer at t = 1:

A(1) = [1, 0] if n is odd;  A(1) = [0, 1] if n is even

Agent b accepts this offer and negotiation ends in the first time period. Note that the equilibrium outcome depends on who makes the first move. Since we have two agents and either of them could move first, we get two possible equilibrium outcomes. On the basis of the above equilibrium for single-issue negotiation with complete information, we first obtain the equilibrium for multiple issues and then show that computing these offers is a hard problem. We then present a time-efficient approximate equilibrium.

¹Having a different discount factor for different agents only makes the presentation more involved without leading to any changes in the analysis of the strategic behaviour of the agents or the time complexity of finding the equilibrium offers. Hence we use a single discount factor for both agents.
²It is possible that b may reject such a proposal. However, irrespective of whether b accepts or rejects the proposal, it gets zero utility (because the deadline is n = 1). Thus, we assume that b accepts a's offer.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
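The parity reasoning above can be sketched in a few lines of Python (a hypothetical illustration of ours, not code from the paper):

```python
# Hypothetical sketch (ours, not the paper's code): backward induction for
# the single-issue alternating-offers game with an indivisible pie. Whoever
# offers at the deadline t = n keeps the whole pie, and that outcome
# propagates unchanged back to t = 1, so only the parity of n matters.

def first_period_offer(n):
    """Return agent a's equilibrium offer (a's share, b's share) at t = 1,
    assuming a offers at odd time periods and b at even ones."""
    if n < 1:
        raise ValueError("deadline must be at least 1")
    return (1, 0) if n % 2 == 1 else (0, 1)
```

For instance, with n = 2 the first mover must concede the whole pie, matching the worked case above.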
Let S = {1, 2, ..., m} denote the set of m pies. As before, each pie is of size 1. Let the discount factor for issue c, where 1 ≤ c ≤ m, be 0 < δ_c < 1. For each issue, let n denote each agent's deadline. In the offer for time period t (where 1 ≤ t ≤ n), agent a's (b's) share for each of the m issues is now represented as an m-element vector x^t ∈ B^m (y^t ∈ B^m), where B denotes the set {0, 1}. Thus, if agent a's share for issue c at time t is x^t_c, then agent b's share is y^t_c = 1 − x^t_c. The shares for a and b are together represented as the package [x^t, y^t]. As is traditional in multi-issue utility theory, we define an agent's cumulative utility using the standard additive form [12]. The functions U^a : B^m × B^m × N+ → R and U^b : B^m × B^m × N+ → R give the cumulative utilities for a and b respectively at time t. These are defined as follows:

U^a([x^t, y^t], t) = Σ_{c=1}^{m} k^a_c x^t_c δ_c^{t−1}   and   U^b([x^t, y^t], t) = Σ_{c=1}^{m} k^b_c y^t_c δ_c^{t−1}

where k^a ∈ N^m_+ denotes an m-element vector of constants for agent a and k^b ∈ N^m_+ that for b. Here N+ denotes the set of positive integers. These vectors indicate how the agents value different issues. For example, if k^a_c > k^a_{c+1}, then agent a values issue c more than issue c + 1; likewise for agent b. In other words, the m issues are perfect substitutes (i.e., all that matters to an agent is its total utility over all m issues, not its utility for any particular subset of them). In all the settings we study, the issues are perfect substitutes. To begin, each agent has complete information about all negotiation parameters (i.e., n, m, k^a_c, k^b_c, and δ_c for 1 ≤ c ≤ m). Now, multi-issue negotiation can be conducted using different procedures. Broadly speaking, there are three key procedures for negotiating multiple issues [19]:

1. the package deal procedure, where all the issues are settled together as a bundle;
2. the sequential procedure, where the issues are discussed one after another; and
3. the simultaneous procedure, where the issues are discussed in parallel.
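Returning to the additive utilities defined above, a minimal helper (ours, not the paper's notation) makes the computation concrete:

```python
# Minimal illustration (ours) of the additive cumulative utility:
# U = sum_c k[c] * share[c] * delta[c]**(t - 1), where share[c] is the
# agent's 0/1 allocation of issue c and t is the agreement time period.

def cumulative_utility(k, shares, delta, t):
    return sum(kc * s * d ** (t - 1) for kc, s, d in zip(k, shares, delta))
```

For example, with weights (1, 5), both issues allocated to the agent, discounts (1/2, 1/2), and agreement at t = 2, the utility is 1·(1/2) + 5·(1/2) = 3.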
Between these three procedures, the package deal is known to generate Pareto-optimal outcomes [19, 6]; hence we adopt it here. We first give a brief description of the procedure and then determine the equilibrium strategies for it.

3.1 The package deal procedure

In this procedure, the agents use the same protocol as for single-issue negotiation (described in Section 2). However, an offer for the package deal includes a proposal for each issue under negotiation. Thus, for m issues, an offer includes m divisions, one for each issue. Agents are allowed to either accept a complete offer (i.e., all m issues) or reject a complete offer. An agreement can therefore take place either on all m issues or on none of them. As in single-issue negotiation, an agent decides what to offer by looking ahead and reasoning backwards. However, since an offer for the package deal includes a share for all m issues, the agents can now make tradeoffs across the issues in order to maximise their cumulative utilities. For 1 ≤ c ≤ m, the equilibrium offer for issue c at time t is denoted [a^t_c, b^t_c], where a^t_c and b^t_c denote the shares for agent a and b respectively. We denote the equilibrium package at time t as [a^t, b^t], where a^t ∈ B^m (b^t ∈ B^m) is an m-element vector that denotes a's (b's) share for each of the m issues. Also, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. The symbols 0 and 1 denote m-element vectors of zeroes and ones respectively. Note that for 1 ≤ t ≤ n, a^t_c + b^t_c = 1 (i.e., the sum of the agents' shares at time t for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let A(t) (respectively B(t)) denote the equilibrium strategy for agent a (respectively b).

3.2 Equilibrium strategies

As mentioned in Section 1, the package deal allows agents to make tradeoffs. We let TRADEOFFA (TRADEOFFB) denote agent a's (b's) function for making tradeoffs. We let P denote a set of parameters to the procedure TRADEOFFA (TRADEOFFB), where P = {k^a, k^b, δ, m}.
Given this, the following theorem characterises the equilibrium for the package deal procedure.

THEOREM 1. For the package deal procedure, the following strategies form a Nash equilibrium. The equilibrium strategy for t = n is:

A(n): OFFER [1, 0] IF a's TURN; ACCEPT IF b's TURN
B(n): OFFER [0, 1] IF b's TURN; ACCEPT IF a's TURN

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

A(t): OFFER TRADEOFFA(P, UB(t), t) IF a's TURN; otherwise, ACCEPT if U^a([x^t, y^t], t) ≥ UA(t), else REJECT
B(t): OFFER TRADEOFFB(P, UA(t), t) IF b's TURN; otherwise, ACCEPT if U^b([x^t, y^t], t) ≥ UB(t), else REJECT

where UA(t) = U^a([a^{t+1}, b^{t+1}], t + 1) and UB(t) = U^b([a^{t+1}, b^{t+1}], t + 1). An agreement takes place at t = 1.

PROOF. We look ahead to the last time period (i.e., t = n) and then reason backwards. To begin, if negotiation reaches the deadline n, then the agent whose turn it is takes everything and leaves nothing for its opponent. Hence we get the strategies A(n) and B(n) given in the statement of the theorem. In all preceding time periods (t < n), the offering agent proposes a package that gives its opponent a cumulative utility equal to what the opponent would get from its own equilibrium offer for the next time period. During time period t, either a or b could be the offering agent. Consider the case where a makes an offer at t. The package that a offers at t gives b a cumulative utility of U^b([a^{t+1}, b^{t+1}], t + 1). However, since there is more than one issue, there is more than one package that gives b this cumulative utility. From among these packages, a offers the one that maximises its own cumulative utility (because it is a utility maximiser). Thus, the problem for a is to find the package [a^t, b^t] so as to:

maximise Σ_{c=1}^{m} k^a_c (1 − b^t_c) δ_c^{t−1}  such that  Σ_{c=1}^{m} k^b_c b^t_c δ_c^{t−1} ≥ UB(t)   (3)

where UB(t), δ_c^{t−1}, k^a_c, and k^b_c are constants and the b^t_c (1 ≤ c ≤ m) are the variables. Assume that the function TRADEOFFA takes the parameters P, UB(t), and t, solves the maximisation problem given in Equation 3, and returns the corresponding package. If there is more than one package that solves Equation 3, then TRADEOFFA returns any one of them (because agent a gets equal utility from all such packages, and so does agent b). The function TRADEOFFB for agent b is analogous.
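For intuition, the maximisation in Equation 3 can be solved exactly by enumerating all 2^m packages. The following brute-force stand-in for TRADEOFFA (ours, for illustration only) does exactly that; it is exponential in m, which is precisely the computational obstacle addressed later in Section 3.3:

```python
from itertools import product

# Brute-force stand-in (ours) for TRADEOFFA: among all 2^m packages that
# give the receiving agent b at least `floor` cumulative utility, return
# the one maximising the offering agent a's cumulative utility at time t.

def tradeoff_a(ka, kb, delta, t, floor):
    best, best_ua = None, float("-inf")
    for b_shares in product((0, 1), repeat=len(ka)):
        a_shares = tuple(1 - s for s in b_shares)
        ub = sum(k * s * d ** (t - 1) for k, s, d in zip(kb, b_shares, delta))
        if ub < floor:
            continue  # package does not meet b's reservation utility
        ua = sum(k * s * d ** (t - 1) for k, s, d in zip(ka, a_shares, delta))
        if ua > best_ua:
            best, best_ua = (a_shares, b_shares), ua
    return best, best_ua
```

With the numbers used later in Example 1 (k^a = (3, 1), k^b = (1, 5), δ = 1/2, floor UB(2) = 3), the call returns the package in which a keeps issue 1 and b gets issue 2.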
On the other hand, the equilibrium strategy for the agent that receives an offer is as follows. For time period t, let b denote the receiving agent. Then b accepts [x^t, y^t] if UB(t) ≤ U^b([x^t, y^t], t); otherwise it rejects the offer, because it can get a higher utility in the next time period. The equilibrium strategy for a as receiving agent is defined analogously. In this way, we reason backwards and obtain the offers for the first time period. Thus we get the equilibrium strategies (A(t) and B(t)) given in the statement of the theorem.

The following example illustrates how the agents make tradeoffs using the above equilibrium strategies.

EXAMPLE 1. Assume there are m = 2 issues for negotiation, the deadline for both issues is n = 2, and the discount factor for both issues for both agents is δ = 1/2. Let k^a_1 = 3, k^a_2 = 1, k^b_1 = 1, and k^b_2 = 5. Let agent a be the first mover. By using backward reasoning, a knows that if negotiation reaches the second time period (which is the deadline), then b will get a hundred percent of both issues. This gives b a cumulative utility of UB(2) = 1/2 + 5/2 = 3. Thus, in the first time period, if b gets anything less than a utility of 3, it will reject a's offer. So, at t = 1, a offers the package in which it gets issue 1 and b gets issue 2. This gives a cumulative utility of 3 to a and 5 to b. Agent b accepts the package and an agreement takes place in the first time period.

The maximisation problem in Equation 3 can be viewed as the 0-1 knapsack problem.³ In the 0-1 knapsack problem, we have a set of m items where each item has a profit and a weight. There is a knapsack with a given capacity. The objective is to fill the knapsack with items so as to maximise the cumulative profit of the items in it. This problem is analogous to the negotiation problem we want to solve (i.e., the maximisation problem of Equation 3).

³For the case of divisible issues this is the fractional knapsack problem. The fractional knapsack problem is computationally easy; it can be solved in time polynomial in the number of items [17]. In contrast, the 0-1 knapsack problem is computationally hard.

Since k^a_c and δ_c^{t−1} are constants, maximising Σ_{c=1}^{m} k^a_c (1 − b^t_c) δ_c^{t−1} is the same as minimising Σ_{c=1}^{m} k^a_c b^t_c. Thus, Equation 3 can be rewritten as a minimisation version of the standard 0-1 knapsack problem⁴ with m items:

minimise Σ_{c=1}^{m} k^a_c b^t_c  such that  Σ_{c=1}^{m} k^b_c b^t_c ≥ UB(t)   (4)

where k^a_c represents the profit for item c, k^b_c the weight for item c, and UB(t) the knapsack capacity.

⁴For the standard 0-1 knapsack problem the weights, profits, and the capacity are positive integers. However, a 0-1 knapsack problem with fractions and non-positive values can easily be transformed to one with positive integers in time linear in m using the methods given in [8, 17].

Example 1 was for two issues, so it was easy to find the equilibrium offers. In general, however, it is not computationally easy to find the equilibrium offers of Theorem 1. The following theorem proves this.

THEOREM 2. For the package deal procedure, the problem of finding the equilibrium offers given in Theorem 1 is NP-hard.

PROOF. Finding the equilibrium offers given in Theorem 1 requires solving the 0-1 knapsack problem given in Equation 4. Since the 0-1 knapsack problem is NP-hard [17], the problem of finding the equilibrium for the package deal is also NP-hard.

3.3 Approximate equilibrium

Researchers in the area of algorithms have found time-efficient methods for computing approximate solutions to 0-1 knapsack problems [10]. Hence we use these methods to find a solution to our negotiation problem. At this stage, we would like to point out the main difference between solving the 0-1 knapsack problem and solving our negotiation problem. The 0-1 knapsack problem involves decision making by a single agent regarding which items to place in the knapsack. Our negotiation problem, on the other hand, involves two players, both of whom are strategic. Hence, in our case, it is not enough to find an approximate solution to the knapsack problem; we must also show that such an approximation forms an equilibrium. The traditional approach for overcoming the computational complexity of finding an equilibrium has been to use an approximate equilibrium (see [14, 26] for example). In this approach, a strategy profile is said to form an ε-approximate Nash equilibrium if neither agent can gain more than the constant ε by deviating.

Hence, our aim is to use the solution to the 0-1 knapsack problem proposed in [10] and show that it forms an approximate equilibrium to our negotiation problem. Before doing so, we give a brief overview of the key ideas that underlie approximation algorithms. There are two key issues in the design of approximation algorithms [1]:

1. the quality of their solution, and
2. the time taken to compute the approximation.

The quality of an approximation algorithm is determined by comparing its performance to that of the optimal algorithm and measuring the relative error [3, 1]. The relative error is defined as (z − z∗)/z∗, where z is the approximate solution and z∗ the optimal one. In general, we are interested in approximation algorithms whose relative error is bounded from above by a certain constant ε, i.e.,

(z − z∗)/z∗ ≤ ε   (5)

Regarding the second issue of time complexity, we are interested in fully polynomial approximation algorithms. An approximation algorithm is said to be fully polynomial if, for any ε > 0, it finds a solution satisfying Equation 5 in time polynomially bounded by the size of the problem (for the 0-1 knapsack problem, the problem size equals the number of items) and by 1/ε [1]. For the 0-1 knapsack problem, Ibarra and Kim [10] presented a fully polynomial approximation method based on dynamic programming.
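To make the flavour of such a scheme concrete, here is a textbook profit-scaling FPTAS sketch for the 0-1 knapsack problem. It is ours and in the spirit of, but not identical to, Ibarra and Kim's algorithm; all names are assumptions:

```python
# Textbook profit-scaling FPTAS sketch (ours, not Ibarra and Kim's exact
# algorithm): scale profits down by eps * max_profit / n, solve the scaled
# instance exactly by dynamic programming over achievable scaled profits,
# and recover a solution whose value is at least (1 - eps) times optimal.

def knapsack_fptas(profits, weights, capacity, eps):
    """Approximate 0-1 knapsack; returns (value, sorted item indices)."""
    assert eps > 0, "eps must be positive"
    n = len(profits)
    scale = eps * max(profits) / n          # profit-scaling factor
    scaled = [int(p / scale) for p in profits]
    total = sum(scaled)
    INF = float("inf")
    # dp[v] = minimum weight achieving scaled profit exactly v
    dp = [0.0] + [INF] * total
    choice = [frozenset()] + [None] * total
    for i in range(n):
        for v in range(total, scaled[i] - 1, -1):
            if dp[v - scaled[i]] + weights[i] < dp[v]:
                dp[v] = dp[v - scaled[i]] + weights[i]
                choice[v] = choice[v - scaled[i]] | {i}
    best_v = max(v for v in range(total + 1) if dp[v] <= capacity)
    items = sorted(choice[best_v])
    return sum(profits[i] for i in items), items
```

The dynamic program runs over scaled profits rather than weights, which is what makes the running time depend on 1/ε rather than on the magnitudes of the original profits.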
It is a parametric method that takes ε as a parameter and, for any ε > 0, finds a heuristic solution z with relative error at most ε, such that the time and space complexity grow polynomially with the number of items m and with 1/ε. More specifically, the space and time complexity are both O(m/ε²) and hence polynomial in m and 1/ε (see [10] for the detailed approximation algorithm and the proof of its time and space complexity). Since the Ibarra and Kim method is fully polynomial, we use it to solve our negotiation problem. This is done as follows. For agent a, let APPROX-TRADEOFFA(P, UB(t), t, ε) denote a procedure that returns an ε-approximate solution to Equation 4 using the Ibarra and Kim method. The procedure APPROX-TRADEOFFB(P, UA(t), t, ε) for agent b is analogous. For 1 ≤ c ≤ m, the approximate equilibrium offer for issue c at time t is denoted [ā^t_c, b̄^t_c], where ā^t_c and b̄^t_c denote the shares for agent a and b respectively. We denote the approximate equilibrium package at time t as [ā^t, b̄^t], where ā^t ∈ B^m (b̄^t ∈ B^m) is an m-element vector that denotes a's (b's) share for each of the m issues. Also, as before, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. Note that for 1 ≤ t ≤ n, ā^t_c + b̄^t_c = 1 (i.e., the sum of the agents' shares at time t for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let Ā(t) (respectively B̄(t)) denote the approximate equilibrium strategy for agent a (respectively b). The following theorem uses this notation and characterises an approximate equilibrium for multi-issue negotiation.

THEOREM 3. For the package deal procedure, the following strategies form an ε-approximate Nash equilibrium. The equilibrium strategy for t = n is:

Ā(n): OFFER [1, 0] IF a's TURN; ACCEPT IF b's TURN
B̄(n): OFFER [0, 1] IF b's TURN; ACCEPT IF a's TURN

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

Ā(t): OFFER APPROX-TRADEOFFA(P, UB(t), t, ε) IF a's TURN; otherwise, ACCEPT if U^a([x^t, y^t], t) ≥ UA(t), else REJECT
B̄(t): OFFER APPROX-TRADEOFFB(P, UA(t), t, ε) IF b's TURN; otherwise, ACCEPT if U^b([x^t, y^t], t) ≥ UB(t), else REJECT

where UA(t) = U^a([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = U^b([ā^{t+1}, b̄^{t+1}], t + 1). An agreement takes place at t = 1.

PROOF.
As in the proof of Theorem 1, we use backward reasoning. We first obtain the strategies for the last time period t = n. It is straightforward to get these strategies: the offering agent gets a hundred percent of all the issues. Then, for t = n − 1, the offering agent must solve the maximisation problem of Equation 4 with t = n − 1 substituted into it. For agent a (b), this is done by APPROX-TRADEOFFA (APPROX-TRADEOFFB). These two functions are simply Ibarra and Kim's approximation method for solving the 0-1 knapsack problem. They take ε as a parameter and use Ibarra and Kim's approximation method to return a package that approximately maximises Equation 4. Thus, the relative error for these two functions is the same as that for Ibarra and Kim's method (i.e., at most ε, where ε is as given in Equation 5). Assume that a is the offering agent for t = n − 1. Agent a must offer a package that gives b a cumulative utility equal to what b would get from its own approximate equilibrium offer for the next time period (i.e., U^b([ā^{t+1}, b̄^{t+1}], t + 1), where [ā^{t+1}, b̄^{t+1}] is the approximate equilibrium package for the next time period). Recall that for the last time period, the offering agent gets a hundred percent of all the issues. Since a is the offering agent for t = n − 1 and the agents use the alternating offers protocol, it is b's turn at t = n. Thus U^b([ā^{t+1}, b̄^{t+1}], t + 1) equals b's cumulative utility from receiving a hundred percent of all the issues. Using this utility as the capacity of the knapsack, a uses APPROX-TRADEOFFA and obtains the approximate equilibrium package for t = n − 1. On the other hand, if b is the offering agent at t = n − 1, it uses APPROX-TRADEOFFB to obtain the approximate equilibrium package. In the same way, for t < n − 1, the offering agent (say a) uses APPROX-TRADEOFFA to find an approximate equilibrium package that gives b a utility of U^b([ā^{t+1}, b̄^{t+1}], t + 1).
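The backward recursion in this proof can be sketched end to end. The following self-contained Python (ours) substitutes a brute-force search for APPROX-TRADEOFFA/B, which keeps the example exact for tiny m while remaining exponential in general, which is exactly why the approximation is needed:

```python
from itertools import product

# Self-contained sketch (ours) of the backward recursion: starting from the
# deadline, each period's offer gives the receiving agent exactly the
# cumulative utility of its own next-period equilibrium package. Brute
# force stands in for the (approximate) knapsack tradeoff routines.

def utility(k, shares, delta, t):
    # additive cumulative utility for a package agreed at time t
    return sum(kc * s * d ** (t - 1) for kc, s, d in zip(k, shares, delta))

def equilibrium_package(ka, kb, delta, n):
    m = len(ka)
    # at the deadline the agent whose turn it is (a at odd t) takes everything
    pkg = ((1,) * m, (0,) * m) if n % 2 == 1 else ((0,) * m, (1,) * m)
    for t in range(n - 1, 0, -1):
        a_next, b_next = pkg
        if t % 2 == 1:  # a offers at t: b must get its utility from t + 1
            floor, k_off, k_rec = utility(kb, b_next, delta, t + 1), ka, kb
        else:           # b offers at t: a must get its utility from t + 1
            floor, k_off, k_rec = utility(ka, a_next, delta, t + 1), kb, ka
        off, rec = max(
            ((tuple(1 - s for s in r), r) for r in product((0, 1), repeat=m)
             if utility(k_rec, r, delta, t) >= floor),
            key=lambda p: utility(k_off, p[0], delta, t),
        )
        pkg = (off, rec) if t % 2 == 1 else (rec, off)
    return pkg  # (a's shares, b's shares) agreed at t = 1
```

Running it on the data of Example 1 reproduces the agreement derived there: a keeps issue 1 and b gets issue 2 in the first time period.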
By reasoning backwards, we obtain the offer for time period t = 1. If a (b) is the offering agent, it proposes the offer APPROX-TRADEOFFA(P, UB(1), 1, ε) (APPROX-TRADEOFFB(P, UA(1), 1, ε)). The receiving agent accepts the offer, because the relative error in its cumulative utility from the offer is at most ε. An agreement therefore takes place in the first time period.

THEOREM 4. The time complexity of finding the ε-approximate equilibrium offer for the first time period is O(nm/ε²).

PROOF. The time complexity of APPROX-TRADEOFFA and APPROX-TRADEOFFB is the same as the time complexity of the Ibarra and Kim method [10], i.e., O(m/ε²). In order to find the equilibrium offer for the first time period using backward reasoning, APPROX-TRADEOFFA (or APPROX-TRADEOFFB) is invoked n times. Hence the time complexity of finding the ε-approximate equilibrium offer for the first time period is O(nm/ε²).

This analysis was done in a complete information setting. However, extending it to an incomplete information setting, in which the agents have probability distributions over some uncertain parameter, is straightforward as long as the negotiation is done offline, i.e., the agents know their preference for each individual issue before negotiation begins. For instance, consider the case where different agents have different discount factors, and each agent is uncertain about its opponent's discount factor although it knows its own. This uncertainty is modelled with a probability distribution over the possible values of the opponent's discount factor, with the distribution being common knowledge to the agents. All our analysis for the complete information setting still holds for this incomplete information setting, except that an agent must now use the given probability distribution to find its opponent's expected utility instead of its actual utility. Hence, instead of analysing an incomplete information setting for offline negotiation, we focus on online multi-issue negotiation.

4. ONLINE MULTI-ISSUE NEGOTIATION

We now consider a more general and, arguably, more realistic version of multi-issue negotiation, in which the agents are uncertain about the issues they will have to negotiate in the future. In this setting, when negotiating an issue, the agents know that they will negotiate more issues in the future, but they are uncertain about the details of those issues. As before, let m be the total number of issues that are up for negotiation. The agents have a probability distribution over the possible values of k^a_c and k^b_c. For 1 ≤ c ≤ m, let k^a_c and k^b_c be uniformly distributed over [0, 1]. This probability distribution, n, and m are common knowledge to the agents. However, the agents come to know k^a_c and k^b_c only just before negotiation for issue c begins. Once the agents reach an agreement on issue c, it cannot be re-negotiated. This scenario requires online negotiation, since the agents must make decisions about an issue prior to having information about the future issues [3]. We first give a brief introduction to online problems and then draw an analogy between the online knapsack problem and the negotiation problem we want to solve. In an online problem, data is given to the algorithm incrementally, one unit at a time [3]. The online algorithm must also produce its output incrementally: after seeing i units of input, it must output the i-th unit of output. Since decisions about the output are made with incomplete knowledge of the entire input, an online algorithm often cannot produce an optimal solution.
Such an algorithm can only approximate the performance of the optimal algorithm that sees all the inputs in advance. In the design of online algorithms, the main aim is to achieve performance close to that of the optimal offline algorithm on each input. An online algorithm is said to be stochastic if it makes decisions on the basis of probability distributions over the future inputs. The performance of stochastic online algorithms is assessed in terms of the expected difference between the optimum and the approximate solution (denoted E[z∗_m − z_m], where z∗_m is the optimal and z_m the approximate solution). The subscript m indicates that this difference depends on m. We now describe the protocol for online negotiation and then obtain an approximate equilibrium. The protocol is defined as follows. Let agent a denote the first mover (since we focus on the package deal procedure, the first mover is the same for all m issues).

Step 1. For c = 1, the agents are given the values of k^a_c and k^b_c. These two values are now common knowledge.⁵
Step 2. The agents settle issue c using the alternating offers protocol described in Section 2. Negotiation for issue c must end within n time periods of the start of negotiation on that issue. If an agreement is not reached within this time, then negotiation fails on this and on all remaining issues.
Step 3. The above steps are repeated for issues c = 2, 3, ..., m. Negotiation for issue c (2 ≤ c ≤ m) begins in the time period following an agreement on issue c − 1.

⁵We assume common knowledge because it simplifies exposition. However, if k^a_c (k^b_c) is a's (b's) private knowledge, our analysis still holds, but an agent must then find its opponent's expected utility on the basis of the probability density functions for k^a_c and k^b_c.
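The timing implied by Steps 1-3 can be made explicit with a toy driver (ours; the per-issue durations are an assumption used only to illustrate the schedule):

```python
# Toy timeline for the online protocol (Steps 1-3). rounds_used[c] is how
# many periods negotiation on issue c takes (at least 1, at most n); issue
# c + 1 starts in the period after issue c is agreed.

def agreement_times(rounds_used, n):
    t, times = 0, []
    for r in rounds_used:
        if not 1 <= r <= n:
            raise ValueError("negotiation fails: deadline exceeded")
        t += r            # issue settles r periods after its start
        times.append(t)
    return times
```

For instance, if every issue settles in the first period of its own negotiation, the issues are agreed at t = 1, 2, ..., m.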
Thus, during time period t, the problem for the offering agent (say a) is to find the optimal offer for issue c on the basis of k^a_c and k^b_c and the probability distributions for k^a_i and k^b_i (c < i ≤ m). In order to solve this online negotiation problem, we draw an analogy with the online knapsack problem. Before doing so, however, we give a brief overview of that problem. In the online knapsack problem, there are m items. The agent must examine the m items one at a time, in the order they are input (i.e., as their profit and size coefficients become known). Hence, the algorithm is required to decide whether or not to include each item in the knapsack as soon as its weight and profit become known, without knowledge of the items still to be seen, except for their total number. Note that since the agents have a probability distribution over the weights and profits of the future items, this is a case of the stochastic online knapsack problem. Our online negotiation problem is analogous to the online knapsack problem; this analogy is described in detail in the proof of Theorem 5. Again, researchers in algorithms have developed time-efficient approximate solutions to the online knapsack problem [16]. Hence we use this solution and show that it forms an equilibrium. The following theorem characterises an approximate equilibrium for online negotiation. Here the agents have to choose a strategy without knowing the features of the future issues. Because of this information incompleteness, the relevant equilibrium solution is that of a Bayes' Nash equilibrium (BNE), in which each agent plays a best response to the other agents with respect to their expected utilities [18]. However, finding an agent's BNE strategy is analogous to solving the online 0-1 knapsack problem, and the online knapsack problem can only be solved approximately [16]. Hence the relevant solution concept is an approximate BNE (see [26] for example).
The following theorem finds this equilibrium using the procedures ONLINE-TRADEOFFA and ONLINE-TRADEOFFB, which are defined in the proof of the theorem. For a given time period, we let z_m denote the approximately optimal solution generated by ONLINE-TRADEOFFA (or ONLINE-TRADEOFFB) and z∗_m the actual optimum.

THEOREM 5. For the package deal procedure, the following strategies form an approximate Bayes' Nash equilibrium. The equilibrium strategy for t = n is:

Ā(n): OFFER [1, 0] IF a's TURN; ACCEPT IF b's TURN
B̄(n): OFFER [0, 1] IF b's TURN; ACCEPT IF a's TURN

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

Ā(t): OFFER ONLINE-TRADEOFFA(P, UB(t), t) IF a's TURN; otherwise, ACCEPT if U^a([x^t, y^t], t) ≥ UA(t), else REJECT
B̄(t): OFFER ONLINE-TRADEOFFB(P, UA(t), t) IF b's TURN; otherwise, ACCEPT if U^b([x^t, y^t], t) ≥ UB(t), else REJECT

where UA(t) = U^a([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = U^b([ā^{t+1}, b̄^{t+1}], t + 1). An agreement on issue c takes place at t = c. For a given time period, the expected difference between the solution generated by the optimal strategy and that generated by the approximate strategy is E[z∗_m − z_m] = O(√m).

PROOF. As in Theorem 1, we find the equilibrium offer for time period t = 1 using backward induction. Let a be the offering agent at t = 1 for all m issues. Consider the last time period t = n (recall from Step 2 of the online protocol that n is the deadline for completing negotiation on the first issue). Since the first mover is the same for all the issues, and the agents make offers alternately, the offering agent for t = n is also the same for all m issues. Assume that b is the offering agent for t = n. As in Section 3, the offering agent for t = n gets a hundred percent of all the issues. Since b is the offering agent for t = n, its utility for this time period is:

UB(n) = k^b_1 δ_1^{n−1} + Σ_{i=2}^{m} δ_i^{i(n−1)}/2   (6)

Recall that k^a_i and k^b_i (for c < i ≤ m) are not known to the agents. Hence, the agents can only find their expected utilities from the future issues on the basis of the probability distribution functions for k^a_i and k^b_i.
However, during the negotiation for issue c, the agents know k^a_c and k^b_c but not the values for the future issues (see Step 1 of the online protocol). Hence, a computes UB(n) as follows. Agent b's utility from issue c = 1 is k^b_1 δ_1^{n−1} (which is the first term of Equation 6). Then, on the basis of the probability distribution functions for k^a_i and k^b_i, agent a computes b's expected utility from each future issue i as δ_i^{i(n−1)}/2 (since k^a_i and k^b_i are uniformly distributed on [0, 1]). Thus, b's expected cumulative utility from these m − c future issues is Σ_{i=c+1}^{m} δ_i^{i(n−1)}/2. Now, in order to decide what to offer for issue c = 1, the offering agent for t = n − 1 (i.e., agent a) must solve the following online knapsack problem:

maximise Σ_{i=1}^{m} k^a_i (1 − b̄^t_i) δ_i^{n−1}  such that  Σ_{i=1}^{m} k^b_i b̄^t_i ≥ UB(n)   (7)

The only variables in the above maximisation problem are the b̄^t_i. Now, maximising Σ_{i=1}^{m} k^a_i (1 − b̄^t_i) δ_i^{n−1} is the same as minimising Σ_{i=1}^{m} k^a_i b̄^t_i, since δ_i^{n−1} and k^a_i are constants. Thus, we can write Equation 7 as:

minimise Σ_{i=1}^{m} k^a_i b̄^t_i  such that  Σ_{i=1}^{m} k^b_i b̄^t_i ≥ UB(n)   (8)

The above optimisation problem is analogous to the online 0-1 knapsack problem. An algorithm for the online knapsack problem, called the fixed-choice online algorithm, has already been proposed in [16]. It has time complexity linear in the number of items (m) in the knapsack problem. We use it to solve our online negotiation problem. Thus, our ONLINE-TRADEOFFA algorithm is simply the fixed-choice online algorithm, and therefore has the same time complexity as the latter. The algorithm takes the values of k^a_i and k^b_i one at a time and generates an approximate solution to the above knapsack problem. The expected difference between the optimum and the approximate solution is E[z∗_m − z_m] = O(√m) (see [16] for the algorithm and a proof of this bound). The fixed-choice online algorithm of [16] is a generalisation of the basic greedy algorithm for the offline knapsack problem.
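The greedy-with-threshold idea behind this algorithm, detailed next, can be sketched as follows. This is a minimal stand-in of ours for the (maximisation-form) online knapsack: the real fixed-choice algorithm of [16] derives its threshold from the distributional information, whereas the constant passed in here is an assumption:

```python
# Minimal density-threshold sketch (ours) of the greedy idea behind the
# fixed-choice online algorithm. Items arrive one at a time as (profit,
# weight) pairs; an item is accepted iff its profit density meets the
# threshold and it still fits in the knapsack.

def online_knapsack(items, capacity, density_threshold):
    value, used, taken = 0.0, 0.0, []
    for index, (profit, weight) in enumerate(items):
        fits = used + weight <= capacity
        if weight > 0 and profit / weight >= density_threshold and fits:
            value += profit
            used += weight
            taken.append(index)
    return value, taken
```

Because each arriving item is inspected exactly once, the running time is linear in the number of items, matching the complexity claim above.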
A threshold value is determined on the basis of the information regarding weights and profits for the 0-1 knapsack problem. The method then includes in the knapsack every item whose profit density (an item's profit per unit weight) exceeds the threshold, until either the knapsack is filled or all m items have been considered. In more detail, the algorithm ONLINE-TRADEOFFA works as follows. It first gets the values of k^a_1 and k^b_1 and finds b̄^t_c. Since we have a 0-1 knapsack problem, b̄^t_c is either zero or one. Now, if b̄^t_c = 1 for t = n, then b̄^t_c must be one for 1 ≤ t ≤ n (i.e., a must offer b̄^t_c = 1 at t = 1): if b̄^t_c = 1 for t = n but a offers b̄^t_c = 0 at t = 1, then agent b gets less utility than it expects from a's offer and rejects the proposal. Thus, if b̄^t_c = 1 for t = n, the optimal strategy for a is to offer b̄^t_c = 1 at t = 1. Agent b accepts the offer. Thus, negotiation on the first issue starts at t = 1 and an agreement on it is also reached at t = 1. In the next time period (i.e., t = 2), negotiation proceeds to the next issue. The deadline for the second issue is n time periods from the start of negotiation on it. For c = 2, the algorithm ONLINE-TRADEOFFA is given the values of k^a_2 and k^b_2 and finds b̄^t_c as described above. Agent a offers b̄^t_c at t = 2 and b accepts. Thus, negotiation on the second issue starts at t = 2 and an agreement on it is also reached at t = 2. This process repeats for the remaining issues c = 3, ..., m. Thus, each issue is agreed upon in the same time period in which its negotiation starts. Since negotiation for the next issue starts in the period following an agreement on the current one (see Step 3 of the online protocol), agreement on issue i occurs at time t = i. On the other hand, if b is the offering agent at t = 1, it uses the algorithm ONLINE-TRADEOFFB, which is defined analogously. Thus, irrespective of who makes the first move, all m issues are settled by time t = m.

THEOREM 6.
The time complexity of finding the approximate equilibrium offers of Theorem 5 is linear in m.

PROOF. The time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is the same as the time complexity of the fixed-choice online algorithm of [16]. Since the latter has time complexity linear in m, the time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is also linear in m.

It is worth noting that, for the 0-1 knapsack problem, the lower bound on the expected difference between the optimum and the solution found by any online algorithm is Ω(1) [16]. It follows that this lower bound also holds for our negotiation problem.

5. RELATED WORK

Work on multi-issue negotiation can be divided into two main strands: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. Since Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda: the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on it; if we change the agenda, these utilities change. Hence, the agents must decide what agenda to use. The agenda can be decided before negotiating the issues (such an agenda is called exogenous) or during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyses sequential negotiation with an exogenous agenda, and a number of researchers have studied negotiations with an endogenous agenda [2]. In contrast to the above work, which mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure.
However, all the above mentioned work differs from ours in that we focus on indivisible issues while the others focus on the case where each issue is divisible (The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)). Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation. Existing work for the case of indivisible issues has mostly dealt with the allocation of tasks (that cannot be partitioned) to a group of agents. The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyzes the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. On the other hand, [22] focuses on the use of contracts for task allocation to multiple self-interested agents, but this work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9] but not for bilateral negotiations (which is the focus of our work).

6. CONCLUSIONS This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium.
These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity. There are several interesting directions for future work. First, for online negotiation, we assumed that the constants kac and kbc are both uniformly distributed. It will be interesting to analyze the case where kac and kbc have other, possibly different, probability distributions. Apart from this, we treated the number of issues as being common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.

Approximate and Online Multi-Issue Negotiation ABSTRACT This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are "indivisible" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting.
In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity.

1. INTRODUCTION Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter offers, in order to obtain a mutually acceptable outcome. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers. That is, they must set the negotiation protocol [20].
On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. 
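To make the indivisible-issue allocation problem concrete, here is a hedged brute-force sketch (not the paper's algorithm): it enumerates all 2^m ways of assigning m issues to two agents, which shows why exact computation blows up as m grows. The valuations and the reservation-utility constraint are invented for illustration.

```python
# Brute-force allocation of m indivisible issues between agents a and b:
# maximize agent a's utility subject to agent b getting at least a
# reservation utility. Exponential in m, illustrating the hardness.
from itertools import product

def best_allocation(val_a, val_b, b_reserve):
    m = len(val_a)
    best, best_u = None, float("-inf")
    for assign in product([0, 1], repeat=m):  # 1 = issue goes to agent a
        u_a = sum(v for v, x in zip(val_a, assign) if x)
        u_b = sum(v for v, x in zip(val_b, assign) if not x)
        if u_b >= b_reserve and u_a > best_u:
            best, best_u = assign, u_a
    return best, best_u

# The agents value the same three issues differently.
alloc, u = best_allocation(val_a=[4, 1, 3], val_b=[2, 5, 2], b_reserve=5)
print(alloc, u)
```

Because the loop visits 2^m allocations, even m = 30 issues is already impractical; this is the exponential cost that the approximately optimal strategies in the paper avoid.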
As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from that of agents without (see [21] for single issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate).
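A minimal sketch of how deadlines and discount factors enter an agent's utility: agreement after the deadline is worth nothing, and earlier agreements are worth more under discounting. The multiplicative form share·δ^t and the parameter values are illustrative assumptions, not the paper's exact utility functions.

```python
# Utility of receiving `share` of a unit pie at time t, with discount
# factor delta and a hard deadline: past the deadline, utility is zero.
def discounted_utility(share, t, delta, deadline):
    if t > deadline:
        return 0.0          # negotiation cannot go on indefinitely
    return share * (delta ** t)

# The same share is worth less the later the agreement is reached.
for t in (1, 2, 6):
    print(t, discounted_utility(0.6, t, delta=0.9, deadline=5))
```

This is why strategic behaviour with deadlines and discount factors differs from the undiscounted, open-ended case: delaying an offer has a real cost.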
The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents. The remainder of the paper is organised as follows. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes.

2. SINGLE-ISSUE NEGOTIATION 3. MULTI-ISSUE NEGOTIATION 3.1 The package deal procedure 3.2 Equilibrium strategies 3.3 Approximate equilibrium 4. ONLINE MULTI-ISSUE NEGOTIATION
5. RELATED WORK Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. Since Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on the agenda; if we change the agenda then these utilities change. Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work that mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above mentioned work differs from ours in that we focus on indivisible issues while the others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation. Existing work for the case of indivisible issues has mostly dealt with the allocation of tasks (that cannot be partitioned) to a group of agents.
The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyzes the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. On the other hand, [22] focuses on the use of contracts for task allocation to multiple self-interested agents, but this work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9] but not for bilateral negotiations (which is the focus of our work).

6. CONCLUSIONS This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity.
There are several interesting directions for future work. First, for online negotiation, we assumed that the constants kac and kbc are both uniformly distributed. It will be interesting to analyze the case where kac and kbc have other, possibly different, probability distributions. Apart from this, we treated the number of issues as being common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.

Approximate and Online Multi-Issue Negotiation ABSTRACT This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are "indivisible" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error.
Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity.

1. INTRODUCTION Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Now, the simplest form of negotiation involves two agents and a single issue. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems.
For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than for the case of indivisible issues. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error.
Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation.

5. RELATED WORK Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on the agenda; if we change the agenda then these utilities change. Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous).
For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work that mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above mentioned work differs from ours in that we focus on indivisible issues while the others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation. Existing work for the case of indivisible issues has mostly dealt with the allocation of tasks (that cannot be partitioned) to a group of agents. The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyzes the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. In contrast, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9] but not for bilateral negotiations (which is the focus of our work).

6. CONCLUSIONS This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium.
These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity. There are several interesting directions for future work. First, for online negotiation, we assumed that the constants kac and kbc are both uniformly distributed. Apart from this, we treated the number of issues as being common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.

I-68 On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints, but they are very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of state and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC-MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains.
Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations. [ "decentr markov decis process", "decentr markov decis process", "tempor constraint", "agent-coordin problem", "valu function propag", "valu function propag", "decis-theoret model", "decentr partial observ markov decis process", "opportun cost", "polici iter", "rescu mission", "probabl function propag", "multipl", "heurist perform", "multi-agent system", "local optim solut" ] [ "P", "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "M", "U", "M", "U", "M" ] On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints Janusz Marecki and Milind Tambe Computer Science Department University of Southern California 941 W 37th Place, Los Angeles, CA 90089 {marecki, tambe}@usc.edu ABSTRACT Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints, but they are very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of state and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC-MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.
Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multi-agent Systems General Terms Algorithms, Theory 1. INTRODUCTION The development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time critical domains has recently become a very active research field, with potential applications ranging from coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Exploration Rovers [2]. Because of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time. Key decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DEC-MDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs). Unfortunately, solving these models optimally has been proven to be NEXP-complete [3], hence more tractable subclasses of these models have been the subject of intensive research. In particular, Network Distributed POMDPs [13], which assume that not all the agents interact with each other, Transition Independent DEC-MDPs [2], which assume that the transition function is decomposable into local transition functions, and DEC-MDPs with Event Driven Interactions [1], which assume that interactions between agents happen at fixed time points, constitute good examples of such subclasses. Although globally optimal algorithms for these subclasses have demonstrated promising results, the domains on which these algorithms run are still small and time horizons are limited to only a few time ticks. To remedy that, locally optimal algorithms have been proposed [12] [4] [5].
In particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double digit time horizons. Additionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains. OC-DEC-MDP is able to scale up to such domains mainly because, instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration. However, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods approach large values. The reason for the high runtimes of OC-DEC-MDP for such domains is a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval. Furthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods. This reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies. In this context, we present VFP (Value Function Propagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP. VFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval. Such a representation allows us to group the time points for which the value function changes at the same rate (i.e., its slope is constant), which results in fast, functional propagation of value functions.
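The representational idea can be sketched as follows, assuming (for illustration) that a method's value is sampled at unit time ticks. Grouping consecutive ticks whose slope is constant yields a compact piecewise-linear curve instead of one value per (method, time-interval) pair. This is a toy sketch of the idea, not the VFP implementation.

```python
# Compress per-time-tick values into piecewise-linear pieces
# (start_time, value_at_start, slope), merging ticks whose slope
# does not change -- the grouping that makes propagation fast.
def compress(values):
    pieces = []
    for t in range(len(values) - 1):
        slope = values[t + 1] - values[t]
        if pieces and pieces[-1][2] == slope:
            continue  # same slope: the current piece simply extends
        pieces.append((t, values[t], slope))
    return pieces

# Value of starting a method at each time tick: 7 ticks collapse to 3 pieces.
print(compress([10, 10, 10, 8, 6, 4, 4]))
```

The longer the stretches of constant slope, the fewer pieces need to be stored and manipulated, which is where the order-of-magnitude speedup over the per-interval representation comes from.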
Second, we prove (both theoretically and empirically) that OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem. This paper is organized as follows: In section 2 we motivate this research by introducing a civilian rescue domain where a team of fire-brigades must coordinate in order to rescue civilians trapped in a burning building. In section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints, and in section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers. Sections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements. Finally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) the new heuristics correct the opportunity cost overestimation problem, leading to higher-quality policies, and (ii) by allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm.

2. MOTIVATING EXAMPLE
We are interested in domains where multiple agents must coordinate their plans over time, despite uncertainty in plan execution duration and outcome. One example domain is a large-scale disaster, like a fire in a skyscraper. Because there can be hundreds of civilians scattered across numerous floors, multiple rescue teams have to be dispatched, and radio communication channels can quickly become saturated and useless. In particular, small teams of fire-brigades must be sent on separate missions to rescue the civilians trapped in dozens of different locations. Picture a small mission plan from Figure (1), where three fire-brigades have been assigned a task to rescue the civilians trapped at site B, accessed from site A (e.g. an office accessed from the floor)1.
General fire fighting procedures involve both: (i) putting out the flames, and (ii) ventilating the site to let the toxic, high-temperature gases escape, with the restriction that ventilation should not be performed too fast, in order to prevent the fire from spreading. The team estimates that the civilians have 20 minutes before the fire at site B becomes unbearable, and that the fire at site A has to be put out in order to open the access to site B. As has happened in past large-scale disasters, communication often breaks down; hence we assume in this domain that there is no communication between fire-brigades 1, 2 and 3 (denoted as FB1, FB2 and FB3). Consequently, FB2 does not know if it is already safe to ventilate site A, FB1 does not know if it is already safe to enter site A and start fighting the fire at site B, etc. We assign the reward 50 for evacuating the civilians from site B, and a smaller reward 20 for the successful ventilation of site A, since the civilians themselves might succeed in breaking out from site B. One can clearly see the dilemma that FB2 faces: it can only estimate the durations of the "Fight fire at site A" methods to be executed by FB1 and FB3, and at the same time FB2 knows that time is running out for the civilians. If FB2 ventilates site A too early, the fire will spread out of control, whereas if FB2 waits too long with the ventilation method, the fire at site B will become unbearable for the civilians. In general, agents have to perform a sequence of such difficult decisions; in particular, the decision process of FB2 involves first choosing when to start ventilating site A, and then (depending on the time it took to ventilate site A) choosing when to start evacuating the civilians from site B.

1 We explain the EST and LET notation in section 3.

Figure 1: Civilian rescue domain and a mission plan. Dotted arrows represent implicit precedence constraints within an agent.
Such a sequence of decisions constitutes the policy of an agent, and it must be found fast, because time is running out.

3. MODEL DESCRIPTION
We encode our decision problems in a model which we refer to as Decentralized MDP with Temporal Constraints2. Each instance of our decision problems can be described as a tuple ⟨M, A, C, P, R⟩, where M = {m_i}_{i=1..|M|} is the set of methods and A = {A_k}_{k=1..|A|} is the set of agents. Agents cannot communicate during mission execution. Each agent A_k is assigned a set M_k of methods, such that ⋃_{k=1..|A|} M_k = M and M_i ∩ M_j = ∅ for all i ≠ j. Also, each method of agent A_k can be executed only once, and agent A_k can execute only one method at a time. Method execution times are uncertain, and P = {p_i}_{i=1..|M|} is the set of distributions of method execution durations; in particular, p_i(t) is the probability that the execution of method m_i consumes time t. C is the set of temporal constraints in the system. Methods are partially ordered, and each method has fixed time windows inside which it can be executed, i.e., C = C_≺ ∪ C_[], where C_≺ is the set of predecessor constraints and C_[] is the set of time window constraints. For c ∈ C_≺, c = ⟨m_i, m_j⟩ means that method m_i precedes method m_j, i.e., the execution of m_j cannot start before m_i terminates. In particular, for an agent A_k, all its methods form a chain linked by predecessor constraints. We assume that the graph G = ⟨M, C_≺⟩ is acyclic and has no disconnected nodes (the problem cannot be decomposed into independent subproblems), and that its source and sink vertices identify the source and sink methods of the system. For c ∈ C_[], c = ⟨m_i, EST, LET⟩ means that the execution of m_i can only start after the Earliest Starting Time EST and must finish before the Latest End Time LET; we allow methods to have multiple disjoint time window constraints.
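As an illustration, the tuple ⟨M, A, C, P, R⟩ can be encoded directly in code. The sketch below is our own minimal Python rendering (all class and field names are ours, not the paper's), with discrete duration distributions:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Method:
    name: str

@dataclass
class Problem:
    methods: List[Method]
    # agent name -> methods it owns (the sets M_k partition M)
    assignment: Dict[str, List[Method]]
    # predecessor constraints C_<: (m_i, m_j) means m_i must finish before m_j starts
    precedence: List[Tuple[Method, Method]]
    # time-window constraints C_[]: method -> list of (EST, LET) windows
    windows: Dict[Method, List[Tuple[int, int]]]
    # p_i: method -> discrete duration distribution, p[d] = Pr(duration == d)
    durations: Dict[Method, List[float]]
    # r_i: non-negative reward obtained on successful completion of m_i
    rewards: Dict[Method, float]

    def horizon(self) -> int:
        # Delta = latest LET over all time windows (the mission deadline)
        return max(let for ws in self.windows.values() for (_, let) in ws)

# Tiny two-method example: m1 (agent A1) enables m2 (agent A2).
m1, m2 = Method("m1"), Method("m2")
prob = Problem(
    methods=[m1, m2],
    assignment={"A1": [m1], "A2": [m2]},
    precedence=[(m1, m2)],
    windows={m1: [(0, 10)], m2: [(0, 20)]},
    durations={m1: [0.0, 0.5, 0.5], m2: [0.0, 1.0]},
    rewards={m1: 0.0, m2: 50.0},
)
```

Note that the partition requirement (each method owned by exactly one agent) and the acyclicity of ⟨M, C_≺⟩ are not enforced by this sketch; a real encoding would validate them on construction.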
Although the distributions p_i can extend to infinite time horizons, given the time window constraints, the planning horizon Δ = max_{⟨m,τ,τ′⟩∈C_[]} τ′ is considered as the mission deadline. Finally, R = {r_i}_{i=1..|M|} is the set of non-negative rewards, i.e., r_i is obtained upon successful execution of m_i. Since no communication is allowed, an agent can only estimate the probabilities that its methods have already been enabled by other agents. Consequently, if m_j ∈ M_k is the next method to be executed by agent A_k and the current time is t ∈ [0, Δ], the agent has to make a decision whether to Execute the method m_j (denoted as E) or to Wait (denoted as W). In case agent A_k decides to wait, it remains idle for an arbitrarily small time ε, and resumes operation at the same place (about to execute method m_j) at time t + ε. In case agent A_k decides to Execute the next method, two outcomes are possible:
Success: Agent A_k receives reward r_j and moves on to its next method (if such a method exists) so long as the following conditions hold: (i) all the methods {m_i | ⟨m_i, m_j⟩ ∈ C_≺} that directly enable method m_j have already been completed; (ii) the execution of method m_j started in some time window of method m_j, i.e., ∃⟨m_j, τ, τ′⟩ ∈ C_[] such that t ∈ [τ, τ′]; and (iii) the execution of method m_j finished inside the same time window, i.e., agent A_k completed method m_j in time less than or equal to τ′ − t.
Failure: If any of the above-mentioned conditions does not hold, agent A_k stops its execution. Other agents may continue their execution, but the methods m_k ∈ {m | ⟨m_j, m⟩ ∈ C_≺} will never become enabled.
The policy π_k of an agent A_k is a function π_k : M_k × [0, Δ] → {W, E}, and π_k(⟨m, t⟩) = a means that if A_k is at method m at time t, it will choose to perform the action a.

2 One could also use the OC-DEC-MDP framework, which models both time and resource constraints.

The Sixth Int'l Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
A joint policy π = [π_k]_{k=1..|A|} is considered to be optimal (denoted as π*) if it maximizes the sum of expected rewards for all the agents.

4. SOLUTION TECHNIQUES
4.1 Optimal Algorithms
The optimal joint policy π* is usually found by using the Bellman update principle, i.e., in order to determine the optimal policy for method m_j, optimal policies for the methods m_k ∈ {m | ⟨m_j, m⟩ ∈ C_≺} are used. Unfortunately, for our model, the optimal policy for method m_j also depends on the policies for the methods m_i ∈ {m | ⟨m, m_j⟩ ∈ C_≺}. This double dependency results from the fact that the expected reward for starting the execution of method m_j at time t also depends on the probability that method m_j will be enabled by time t. Consequently, if time is discretized, one needs to consider Δ^|M| candidate policies in order to find π*. Thus, globally optimal algorithms used for solving real-world problems are unlikely to terminate in reasonable time [11]. The complexity of our model could be reduced if we considered a more restricted version of it; in particular, if each method m_j were allowed to be enabled only at time points t ∈ T_j ⊂ [0, Δ], the Coverage Set Algorithm (CSA) [1] could be used. However, CSA complexity is doubly exponential in the size of T_j, and for our domains T_j can store all values ranging from 0 to Δ.

4.2 Locally Optimal Algorithms
Given the limited applicability of globally optimal algorithms for DEC-MDPs with Temporal Constraints, locally optimal algorithms appear more promising. Specifically, the OC-DEC-MDP algorithm [4] is particularly significant, as it has been shown to easily scale up to domains with hundreds of methods. The idea of the OC-DEC-MDP algorithm is to start with the earliest starting time policy π0 (according to which an agent will start executing the method m as soon as m has a non-zero chance of being already enabled), and then improve it iteratively, until no further improvement is possible.
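The iterate-until-no-improvement scheme just described has a simple generic shape. The following Python sketch is our paraphrase (the function names and the toy policy representation in the usage below are ours); the four callables stand in for the forward/backward propagation phases and policy extraction:

```python
from typing import Callable

def improve_policy(
    policy,
    probability_propagation: Callable,  # policy -> forward-propagated probabilities
    value_propagation: Callable,        # probabilities -> backward-propagated values
    extract_policy: Callable,           # values -> candidate policy
    evaluate: Callable,                 # policy -> expected total reward
    max_iters: int = 100,
):
    """Generic shape of a policy-improvement loop that starts from some
    initial policy (e.g. the earliest-starting-time policy) and stops
    when an iteration no longer improves the expected reward."""
    best = evaluate(policy)
    for _ in range(max_iters):
        probs = probability_propagation(policy)   # forward pass
        values = value_propagation(probs)         # backward pass
        candidate = extract_policy(values)
        score = evaluate(candidate)
        if score <= best:                         # no improvement: terminate
            return policy
        policy, best = candidate, score
    return policy

# Toy usage: the "policy" is a single number, and improvement walks it
# toward the maximizer of the evaluation function.
result = improve_policy(
    0,
    lambda p: p,                     # trivial forward pass
    lambda v: v,                     # trivial backward pass
    lambda v: min(v + 1, 5),         # greedy step
    lambda p: -(p - 5) ** 2,         # peak at p = 5
)
```

This is only the control-flow skeleton; the substance of OC-DEC-MDP (and of VFP later) lies in what the two propagation callables compute.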
At each iteration, the algorithm starts with some policy π, which uniquely determines the probabilities P_{i,[τ,τ′]} that method m_i will be performed in the time interval [τ, τ′]. It then performs two steps:
Step 1: It propagates from sink methods to source methods the values V_{i,[τ,τ′]} that represent the expected utility for executing method m_i in the time interval [τ, τ′]. This propagation uses the probabilities P_{i,[τ,τ′]} from the previous algorithm iteration. We call this step the value propagation phase.
Step 2: Given the values V_{i,[τ,τ′]} from Step 1, the algorithm chooses the most profitable method execution intervals, which are stored in a new policy π′. It then propagates the new probabilities P′_{i,[τ,τ′]} from source methods to sink methods. We call this step the probability propagation phase.
If policy π′ does not improve on π, the algorithm terminates. There are two shortcomings of the OC-DEC-MDP algorithm that we address in this paper. First, each OC-DEC-MDP state is a pair ⟨m_j, [τ, τ′]⟩, where [τ, τ′] is a time interval in which method m_j can be executed. While such a state representation is beneficial, in that the problem can be solved with a standard value iteration algorithm, it blurs the intuitive mapping from time t to the expected total reward for starting the execution of m_j at time t. Consequently, if some method m_i enables method m_j, and the values V_{j,[τ,τ′]} for all τ, τ′ ∈ [0, Δ] are known, the operation that calculates the values V_{i,[τ,τ′]} for all τ, τ′ ∈ [0, Δ] (during the value propagation phase) runs in time O(I²), where I is the number of time intervals3. Since the runtime of the whole algorithm is proportional to the runtime of this operation, the OC-DEC-MDP algorithm runs slowly, especially for big time horizons Δ. Second, while OC-DEC-MDP emphasizes precise calculation of the values V_{j,[τ,τ′]}, it fails to address a critical issue: how the values V_{j,[τ,τ′]} are split when the method m_j has multiple enabling methods.
As we show later, OC-DEC-MDP splits V_{j,[τ,τ′]} into parts that may overestimate V_{j,[τ,τ′]} when summed up again. As a result, methods that precede the method m_j overestimate the value for enabling m_j, which, as we show later, can have disastrous consequences. In the next two sections, we address both of these shortcomings.

5. VALUE FUNCTION PROPAGATION (VFP)
The general scheme of the VFP algorithm is identical to the OC-DEC-MDP algorithm, in that it performs a series of policy improvement iterations, each one involving a value and probability propagation phase. However, instead of propagating separate values, VFP maintains and propagates whole functions; we therefore refer to these phases as the value function propagation phase and the probability function propagation phase. To this end, for each method m_i ∈ M, we define three new functions:
Value Function, denoted as v_i(t), that maps time t ∈ [0, Δ] to the expected total reward for starting the execution of method m_i at time t.
Opportunity Cost Function, denoted as V_i(t), that maps time t ∈ [0, Δ] to the expected total reward for starting the execution of method m_i at time t, assuming that m_i is enabled.
Probability Function, denoted as P_i(t), that maps time t ∈ [0, Δ] to the probability that method m_i will be completed before time t.
Such a functional representation allows us to easily read the current policy, i.e., if an agent A_k is at method m_i at time t, then it will wait as long as the value function v_i will be greater in the future. Formally:

π_k(⟨m_i, t⟩) = W if ∃ t′ > t such that v_i(t) < v_i(t′); E otherwise.

We now develop an analytical technique for performing the value function and probability function propagation phases.

3 Similarly for the probability propagation phase.
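With time discretized, the policy-reading rule π_k(⟨m_i, t⟩) = W iff ∃ t′ > t with v_i(t) < v_i(t′) reduces to a single suffix-maximum scan. A minimal sketch (our code; v is assumed to be the value function sampled at t = 0, …, Δ):

```python
def read_policy(v):
    """Return one action per time step: 'W' (wait) at time t iff some
    strictly later value v[t'] exceeds v[t], else 'E' (execute).
    Computed right-to-left with a running suffix maximum."""
    n = len(v)
    actions = ["E"] * n
    future_max = float("-inf")
    for t in range(n - 1, -1, -1):
        if future_max > v[t]:
            actions[t] = "W"
        future_max = max(future_max, v[t])
    return actions

# Waiting is optimal exactly where the value function later increases.
print(read_policy([1.0, 3.0, 2.0, 2.0]))  # ['W', 'E', 'E', 'E']
```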
5.1 Value Function Propagation Phase
Suppose that we are performing a value function propagation phase, during which the value functions are propagated from the sink methods to the source methods. At any time during this phase we encounter the situation shown in Figure 2, where the opportunity cost functions [V_{j_n}]_{n=0..N} of methods [m_{j_n}]_{n=0..N} are known, and the opportunity cost V_{i_0} of method m_{i_0} is to be derived. Let p_{i_0} be the probability distribution function of method m_{i_0}'s execution duration, and r_{i_0} be the immediate reward for starting and completing the execution of method m_{i_0} inside a time interval [τ, τ′] such that ⟨m_{i_0}, τ, τ′⟩ ∈ C_[]. The function V_{i_0} is then derived from r_{i_0} and the opportunity costs V_{j_n,i_0}(t), n = 0, ..., N, from future methods. Formally:

V_{i_0}(t) = ∫_0^{τ′−t} p_{i_0}(t′) ( r_{i_0} + Σ_{n=0..N} V_{j_n,i_0}(t + t′) ) dt′   if ∃⟨m_{i_0}, τ, τ′⟩ ∈ C_[] such that t ∈ [τ, τ′];  0 otherwise.   (1)

Note that, for t ∈ [τ, τ′], if h(t) := r_{i_0} + Σ_{n=0..N} V_{j_n,i_0}(τ′ − t), then V_{i_0} is a convolution of p_{i_0} and h: V_{i_0}(t) = (p_{i_0} ∗ h)(τ′ − t). Assume for now that V_{j_n,i_0} represents a full opportunity cost, postponing the discussion of different techniques for splitting the opportunity cost V_{j_0} into [V_{j_0,i_k}]_{k=0..K} until section 6. We now show how to derive V_{j_0,i_0} (the derivation of V_{j_n,i_0} for n ≠ 0 follows the same scheme).

Figure 2: Fragment of an MDP of agent A_k. Probability functions propagate forward (left to right) whereas value functions propagate backward (right to left).

Let V̄_{j_0,i_0}(t) be the opportunity cost of starting the execution of method m_{j_0} at time t given that method m_{i_0} has been completed. It is derived by multiplying V_{j_0} by the probability functions of all methods other than m_{i_0} that enable m_{j_0}. Formally:

V̄_{j_0,i_0}(t) = V_{j_0}(t) · ∏_{k=1..K} P_{i_k}(t),

where, similarly to [4] and [5], we have ignored the dependency of [P_{i_k}]_{k=1..K}.
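In discrete time, Equation (1) above becomes a straightforward sum over durations. A sketch of one such propagation step (our code and naming; for simplicity a single time window [est, let] is passed in as parameters):

```python
def opportunity_cost(p, downstream, r, est, let):
    """Discrete sketch of Eq. (1): expected reward for starting a method
    at time t inside its window [est, let].
    p[d]        -- probability the method takes d time steps
    downstream  -- downstream[t] = sum of split opportunity costs at t
    r           -- immediate reward of the method
    Returns V[t] for t = 0 .. len(downstream)-1; V[t] = 0 outside the window."""
    horizon = len(downstream)
    V = [0.0] * horizon
    for t in range(est, min(let, horizon - 1) + 1):
        total = 0.0
        # only executions that finish by the window's LET contribute
        for d, pd in enumerate(p):
            if t + d <= let and t + d < horizon:
                total += pd * (r + downstream[t + d])
        V[t] = total
    return V
```

For example, a method with deterministic duration 1, reward 10 and window [0, 3] earns 10 in expectation when started at t = 0, 1, 2 but nothing at t = 3, since it can no longer finish inside the window.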
Observe that V̄_{j_0,i_0} does not have to be monotonically decreasing, i.e., delaying the execution of method m_{i_0} can sometimes be profitable. Therefore the opportunity cost V_{j_0,i_0}(t) of enabling method m_{j_0} at time t must be greater than or equal to V̄_{j_0,i_0}(t). Furthermore, V_{j_0,i_0} should be non-increasing. Formally:

V_{j_0,i_0} = min_{f∈F} f   (2)

where F = {f | f ≥ V̄_{j_0,i_0} and f(t) ≥ f(t′) ∀ t < t′}. Knowing the opportunity cost V_{i_0}, we can then easily derive the value function v_{i_0}. Let A_k be the agent assigned to the method m_{i_0}. If A_k is about to start the execution of m_{i_0}, it means that A_k must have completed its part of the mission plan up to the method m_{i_0}. Since A_k does not know if other agents have completed the methods [m_{l_k}]_{k=1..K}, in order to derive v_{i_0} it has to multiply V_{i_0} by the probability functions of all methods of other agents that enable m_{i_0}. Formally:

v_{i_0}(t) = V_{i_0}(t) · ∏_{k=1..K} P_{l_k}(t),

where the dependency of [P_{l_k}]_{k=1..K} is also ignored. We have consequently shown a general scheme for propagating the value functions: knowing [v_{j_n}]_{n=0..N} and [V_{j_n}]_{n=0..N} of methods [m_{j_n}]_{n=0..N}, we can derive v_{i_0} and V_{i_0} of method m_{i_0}. In general, the value function propagation scheme starts with the sink nodes. It then visits, at each step, a method m such that all the methods that m enables have already been marked as visited. The value function propagation phase terminates when all the source methods have been marked as visited.

5.2 Reading the Policy
In order to determine the policy of agent A_k for the method m_{j_0}, we must identify the set Z_{j_0} of intervals [z, z′] ⊂ [0, Δ] such that ∀ t ∈ [z, z′]: π_k(⟨m_{j_0}, t⟩) = W. One can easily identify the intervals of Z_{j_0} by looking at the time intervals in which the value function v_{j_0} does not decrease monotonically.
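The minimal non-increasing function dominating V̄_{j_0,i_0}, as required by Equation (2) above, can be computed in one backward pass as a suffix maximum. A discrete sketch (our code):

```python
def nonincreasing_envelope(f):
    """Discrete form of Eq. (2): the pointwise-smallest non-increasing
    function that dominates f, obtained as a right-to-left running maximum."""
    out = list(f)
    for t in range(len(out) - 2, -1, -1):
        out[t] = max(out[t], out[t + 1])
    return out

print(nonincreasing_envelope([1.0, 4.0, 2.0, 3.0, 0.0]))  # [4.0, 4.0, 3.0, 3.0, 0.0]
```

Each output value is the best cost still reachable at or after t, which is exactly why waiting can never look worse than this envelope suggests.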
5.3 Probability Function Propagation Phase
Assume now that the value functions and opportunity cost functions have all been propagated from sink methods to source nodes, and that the sets Z_j for all methods m_j ∈ M have been identified. Since the value function propagation phase was using the probabilities P_i(t), for methods m_i ∈ M and times t ∈ [0, Δ], found at the previous algorithm iteration, we now have to find the new values P_i(t) in order to prepare the algorithm for its next iteration. We now show, for the general case (Figure 2), how to propagate the probability functions forward through one method; i.e., we assume that the probability functions [P_{i_k}]_{k=0..K} of methods [m_{i_k}]_{k=0..K} are known, and the probability function P_{j_0} of method m_{j_0} must be derived. Let p_{j_0} be the probability distribution function of method m_{j_0}'s execution duration, and Z_{j_0} be the set of intervals of inactivity for method m_{j_0}, found during the last value function propagation phase. If we ignore the dependency of [P_{i_k}]_{k=0..K}, then the probability P̄_{j_0}(t) that the execution of method m_{j_0} starts before time t is given by:

P̄_{j_0}(t) = ∏_{k=0..K} P_{i_k}(τ)  if ∃ (τ, τ′) ∈ Z_{j_0} such that t ∈ (τ, τ′);  ∏_{k=0..K} P_{i_k}(t) otherwise.

Given P̄_{j_0}(t), the probability P_{j_0}(t) that method m_{j_0} will be completed by time t is derived by:

P_{j_0}(t) = ∫_0^t ∫_0^{t′} (∂P̄_{j_0}/∂t)(t″) · p_{j_0}(t′ − t″) dt″ dt′,   (3)

which can be written compactly as ∂P_{j_0}/∂t = p_{j_0} ∗ ∂P̄_{j_0}/∂t. We have consequently shown how to propagate the probability functions [P_{i_k}]_{k=0..K} of methods [m_{i_k}]_{k=0..K} to obtain the probability function P_{j_0} of method m_{j_0}. In general, the probability function propagation phase starts with the source methods m_{s_i}, for which we know that P_{s_i} = 1, since they are enabled by default. We then visit, at each step, a method m such that all the methods that enable m have already been marked as visited.
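A discrete sketch of this probability propagation step (our code and naming; as in the paper, dependencies between the enabling probability functions are ignored, and the wait intervals Z freeze the start probability):

```python
def propagate_probability(enabler_cdfs, p, Z):
    """Discrete sketch of the probability propagation step.
    enabler_cdfs -- list of P_{i_k}: cdf[t] = Pr(enabling method done by t)
    p            -- duration distribution of the method being propagated
    Z            -- list of (z, z') wait intervals: no new starts inside them
    Returns (Pbar, P): started-by-t and completed-by-t probabilities."""
    horizon = len(enabler_cdfs[0])
    # product of enabling probabilities (dependencies ignored, as in the paper)
    prod = [1.0] * horizon
    for cdf in enabler_cdfs:
        prod = [a * b for a, b in zip(prod, cdf)]
    # freeze the start probability inside the wait intervals
    Pbar = prod[:]
    for (z, z2) in Z:
        for t in range(z + 1, min(z2, horizon - 1) + 1):
            Pbar[t] = Pbar[z]
    # completed-by-t: convolve the *density* of start times with p (Eq. (3))
    start_density = [Pbar[0]] + [Pbar[t] - Pbar[t - 1] for t in range(1, horizon)]
    P = [0.0] * horizon
    for t in range(horizon):
        acc = 0.0
        for s in range(t + 1):
            for d, pd in enumerate(p):
                if s + d <= t:
                    acc += start_density[s] * pd
        P[t] = acc
    return Pbar, P
```

For instance, if the single enabler finishes at t = 1 with certainty and the method itself always takes 1 step, the method is completed by t = 2 with probability 1; freezing the whole horizon via Z suppresses the start entirely.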
The probability function propagation phase terminates when all the sink methods have been marked as visited.

5.4 The Algorithm
Similarly to the OC-DEC-MDP algorithm, VFP starts the policy improvement iterations with the earliest starting time policy π0. Then, at each iteration, it: (i) propagates the value functions [v_i]_{i=1..|M|}, using the old probability functions [P_i]_{i=1..|M|} from the previous algorithm iteration, and establishes the new sets [Z_i]_{i=1..|M|} of method inactivity intervals; and (ii) propagates the new probability functions [P′_i]_{i=1..|M|} using the newly established sets [Z_i]_{i=1..|M|}. These new functions [P′_i]_{i=1..|M|} are then used in the next iteration of the algorithm. Similarly to OC-DEC-MDP, VFP terminates if the new policy does not improve on the policy from the previous algorithm iteration.

5.5 Implementation of Function Operations
So far, we have derived the functional operations for value function and probability function propagation without choosing any function representation. In general, our functional operations can handle continuous time, and one has the freedom to choose a desired function approximation technique, such as piecewise linear [7] or piecewise constant [9] approximation. However, since one of our goals is to compare VFP with the existing OC-DEC-MDP algorithm, which works only for discrete time, we also discretize time, and choose to approximate value functions and probability functions with piecewise linear (PWL) functions. When the VFP algorithm propagates the value functions and probability functions, it constantly carries out the operations represented by equations (1) and (3), and we have already shown that these operations are convolutions of some functions p(t) and h(t). If time is discretized, the functions p(t) and h(t) are discrete; however, h(t) can be nicely approximated with a PWL function ĥ(t), which is exactly what VFP does.
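One simple way to obtain such a PWL approximation ĥ(t) from discrete samples of h(t) is a greedy segment-growing pass. The sketch below is our own illustration (not necessarily the paper's construction); it returns the breakpoint indices of the segments, and the number of segments k is then len(breaks) − 1:

```python
def pwl_segments(h, eps):
    """Greedy sketch: cover the samples h[0..n-1] with few linear segments
    such that linear interpolation within each segment stays within eps
    of the samples.  Returns the list of breakpoint indices."""
    n = len(h)
    breaks = [0]
    start, end = 0, 1
    while end < n:
        # check whether the segment [start, end] still fits within eps
        ok = True
        for t in range(start + 1, end):
            interp = h[start] + (h[end] - h[start]) * (t - start) / (end - start)
            if abs(interp - h[t]) > eps:
                ok = False
                break
        if ok:
            end += 1                      # extend the current segment
        else:
            breaks.append(end - 1)        # close it at the last good index
            start = end - 1
            end = start + 1
    breaks.append(n - 1)
    return breaks
```

A perfectly linear h collapses to a single segment, while a step-shaped h needs a breakpoint on each side of the jump, which is the k ≪ Δ behavior exploited above for monotonic h.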
As a result, instead of performing O(Δ²) multiplications to compute f(t), VFP only needs to perform O(k · Δ) multiplications to compute f(t), where k is the number of linear segments of ĥ(t) (note that since h(t) is monotonic, ĥ(t) is usually close to h(t) with k ≪ Δ). Since the P_i values are in the range [0, 1] and the V_i values are in the range [0, Σ_{m_i∈M} r_i], we suggest approximating V_i(t) with V̂_i(t) within error ε_V, and P_i(t) with P̂_i(t) within error ε_P. We now prove that the overall approximation error accumulated during the value function propagation phase can be expressed in terms of ε_P and ε_V:

THEOREM 1. Let C_≺ be the set of precedence constraints of a DEC-MDP with Temporal Constraints, and let ε_P and ε_V be the probability function and value function approximation errors, respectively. The overall error ε_π = max_V sup_{t∈[0,Δ]} |V(t) − V̂(t)| of the value function propagation phase is then bounded by |C_≺| ε_V + ((1 + ε_P)^{|C_≺|} − 1) Σ_{m_i∈M} r_i.

PROOF. In order to establish the bound for ε_π, we first prove, by induction on the size of C_≺, that the overall error of the probability function propagation phase, ε_π(P) = max_P sup_{t∈[0,Δ]} |P(t) − P̂(t)|, is bounded by (1 + ε_P)^{|C_≺|} − 1.
Induction base: If |C_≺| = 1, only two methods are present, and we will perform the operation identified by Equation (3) only once, introducing the error ε_π(P) = ε_P = (1 + ε_P)^{|C_≺|} − 1.
Induction step: Suppose that ε_π(P) for |C_≺| = n is bounded by (1 + ε_P)^n − 1; we want to prove that the statement holds for |C_≺| = n + 1. Let G = ⟨M, C_≺⟩ be a graph with at most n + 1 edges, and G′ = ⟨M, C′_≺⟩ be a subgraph of G such that C′_≺ = C_≺ − {⟨m_i, m_j⟩}, where m_j ∈ M is a sink node in G. From the induction assumption we have that C′_≺ introduces a probability propagation phase error bounded by (1 + ε_P)^n − 1. We now add back the link ⟨m_i, m_j⟩ to C′_≺, which affects the error of only one probability function, namely P_j, by a factor of (1 + ε_P).
Since the probability propagation phase error in C′_≺ was bounded by (1 + ε_P)^n − 1, in C_≺ = C′_≺ ∪ {⟨m_i, m_j⟩} it can be at most ((1 + ε_P)^n − 1)(1 + ε_P) < (1 + ε_P)^{n+1} − 1. Thus, if the opportunity cost functions are not overestimated, they are bounded by Σ_{m_i∈M} r_i, and the error of a single value function propagation operation is at most

∫_0^Δ p(t) ( ε_V + ((1 + ε_P)^{|C_≺|} − 1) Σ_{m_i∈M} r_i ) dt < ε_V + ((1 + ε_P)^{|C_≺|} − 1) Σ_{m_i∈M} r_i.

Since the number of value function propagation operations is |C_≺|, the total error ε_π of the value function propagation phase is bounded by |C_≺| ε_V + ((1 + ε_P)^{|C_≺|} − 1) Σ_{m_i∈M} r_i. □

6. SPLITTING THE OPPORTUNITY COST FUNCTIONS
In section 5 we left out the discussion of how the opportunity cost function V_{j_0} of method m_{j_0} is split into the opportunity cost functions [V_{j_0,i_k}]_{k=0..K} sent back to the methods [m_{i_k}]_{k=0..K} that directly enable method m_{j_0}. So far, we have taken the same approach as in [4] and [5], in that the opportunity cost function V_{j_0,i_k} sent back to method m_{i_k} is the minimal non-increasing function that dominates V̄_{j_0,i_k}(t) = (V_{j_0} · ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}})(t). We refer to this approach as heuristic H^{1,1}. Before we prove that this heuristic overestimates the opportunity cost, we discuss three problems that might occur when splitting the opportunity cost functions: (i) overestimation, (ii) underestimation and (iii) starvation.

Figure 3: Splitting the value function of method m_{j_0} among methods [m_{i_k}]_{k=0..K}.

Consider the situation in Figure (3), when value function propagation for the methods [m_{i_k}]_{k=0..K} is performed. For each k = 0, ..., K, Equation (1) derives the opportunity cost function V_{i_k} from the immediate reward r_{i_k} and the opportunity cost function V_{j_0,i_k}. If m_0 is the only method that precedes method m_{i_k}, then V_{i_k,0} = V_{i_k} is propagated to method m_0, and consequently the opportunity cost for completing the method m_0 at time t is equal to Σ_{k=0..K} V_{i_k,0}(t).
If this cost is overestimated, then an agent A_0 at method m_0 will have too much incentive to finish the execution of m_0 at time t. Consequently, although the probability P(t) that m_0 will be enabled by other agents by time t is low, agent A_0 might still find the expected utility of starting the execution of m_0 at time t higher than the expected utility of doing it later. As a result, it will choose at time t to start executing method m_0 instead of waiting, which can have disastrous consequences. Similarly, if Σ_{k=0..K} V_{i_k,0}(t) is underestimated, agent A_0 might lose interest in enabling the future methods [m_{i_k}]_{k=0..K} and just focus on maximizing the chance of obtaining its immediate reward r_0. Since this chance is increased when agent A_0 waits4, it will consider it more profitable at time t to wait instead of starting the execution of m_0, which can have similarly disastrous consequences. Finally, if V_{j_0} is split in such a way that, for some k, V_{j_0,i_k} = 0, it is the method m_{i_k} that underestimates the opportunity cost of enabling method m_{j_0}, and similar reasoning applies. We call such a problem the starvation of method m_{i_k}. This short discussion shows the importance of splitting the opportunity cost function V_{j_0} in such a way that the overestimation, underestimation and starvation problems are avoided. We now prove that:

THEOREM 2. Heuristic H^{1,1} can overestimate the opportunity cost.

PROOF. We prove the theorem by showing a case where the overestimation occurs. For the mission plan from Figure (3), let H^{1,1} split V_{j_0} into [V̄_{j_0,i_k} = V_{j_0} · ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}}]_{k=0..K}, sent to the methods [m_{i_k}]_{k=0..K} respectively. Also, assume that the methods [m_{i_k}]_{k=0..K} provide no local reward and have the same time windows, i.e., r_{i_k} = 0, EST_{i_k} = 0, LET_{i_k} = Δ for k = 0, ..., K.
To prove the overestimation of the opportunity cost, we must identify a t_0 ∈ [0, Δ] such that the opportunity cost Σ_{k=0..K} V_{i_k}(t_0) for the methods [m_{i_k}]_{k=0..K} is greater than the opportunity cost V_{j_0}(t_0). From Equation (1) we have:

V_{i_k}(t) = ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0,i_k}(t + t′) dt′.

Summing over all methods [m_{i_k}]_{k=0..K}, we obtain:

Σ_{k=0..K} V_{i_k}(t) = Σ_{k=0..K} ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0,i_k}(t + t′) dt′   (4)
≥ Σ_{k=0..K} ∫_0^{Δ−t} p_{i_k}(t′) V̄_{j_0,i_k}(t + t′) dt′
= Σ_{k=0..K} ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0}(t + t′) ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}}(t + t′) dt′.

Let c ∈ (0, 1] be a constant and t_0 ∈ [0, Δ] be such that for all t > t_0 and all k = 0, ..., K we have ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}}(t) > c. Then:

Σ_{k=0..K} V_{i_k}(t_0) > Σ_{k=0..K} ∫_0^{Δ−t_0} p_{i_k}(t′) V_{j_0}(t_0 + t′) · c dt′,

because the P_{i_k} are non-decreasing. Now, suppose there exists a t_1 ∈ (t_0, Δ] such that Σ_{k=0..K} ∫_0^{t_1−t_0} p_{i_k}(t′) dt′ > V_{j_0}(t_0) / (c · V_{j_0}(t_1)). Since decreasing the upper limit of an integral over a positive function also decreases the integral, we have:

Σ_{k=0..K} V_{i_k}(t_0) > c Σ_{k=0..K} ∫_{t_0}^{t_1} p_{i_k}(t′ − t_0) V_{j_0}(t′) dt′,

and since V_{j_0} is non-increasing, we have:

Σ_{k=0..K} V_{i_k}(t_0) > c · V_{j_0}(t_1) Σ_{k=0..K} ∫_{t_0}^{t_1} p_{i_k}(t′ − t_0) dt′   (5)
= c · V_{j_0}(t_1) Σ_{k=0..K} ∫_0^{t_1−t_0} p_{i_k}(t′) dt′ > c · V_{j_0}(t_1) · V_{j_0}(t_0) / (c · V_{j_0}(t_1)) = V_{j_0}(t_0).

Consequently, the opportunity cost Σ_{k=0..K} V_{i_k}(t_0) of starting the execution of the methods [m_{i_k}]_{k=0..K} at time t_0 is greater than the opportunity cost V_{j_0}(t_0), which proves the theorem. □

4 Assuming LET_0 > t.

Figure 4 shows that the overestimation of the opportunity cost is easily observable in practice. To remedy the problem of opportunity cost overestimation, we propose three alternative heuristics for splitting the opportunity cost functions:
• Heuristic H^{1,0}: Only one method, m_{i_k}, gets the full expected reward for enabling method m_{j_0}, i.e., V̄_{j_0,i_{k′}}(t) = 0 for k′ ∈ {0, ..., K}\{k} and V̄_{j_0,i_k}(t) = (V_{j_0} · ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}})(t).
• Heuristic H^{1/2,1/2}: Each method [m_{i_k}]_{k=0..K} gets the full opportunity cost for enabling method m_{j_0} divided by the number K of methods enabling the method m_{j_0}, i.e., V̄_{j_0,i_k}(t) = (1/K)(V_{j_0} · ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}})(t) for k ∈ {0, ..., K}.
• Heuristic Ĥ^{1,1}: This is a normalized version of the H^{1,1} heuristic, in that each method [m_{i_k}]_{k=0..K} initially gets the full opportunity cost for enabling the method m_{j_0}. To avoid opportunity cost overestimation, we normalize the split functions when their sum exceeds the opportunity cost function to be split. Formally:

V̄_{j_0,i_k}(t) = V̄^{H^{1,1}}_{j_0,i_k}(t)  if Σ_{k=0..K} V̄^{H^{1,1}}_{j_0,i_k}(t) < V_{j_0}(t);  V_{j_0}(t) · V̄^{H^{1,1}}_{j_0,i_k}(t) / Σ_{k=0..K} V̄^{H^{1,1}}_{j_0,i_k}(t) otherwise,

where V̄^{H^{1,1}}_{j_0,i_k}(t) = (V_{j_0} · ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}})(t).
For the new heuristics, we now prove that:

THEOREM 3. Heuristics H^{1,0}, H^{1/2,1/2} and Ĥ^{1,1} do not overestimate the opportunity cost.

PROOF. When heuristic H^{1,0} is used to split the opportunity cost function V_{j_0}, only one method (e.g. m_{i_k}) gets the opportunity cost for enabling method m_{j_0}. Thus:

Σ_{k′=0..K} V_{i_{k′}}(t) = ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0,i_k}(t + t′) dt′   (6)

and, since V_{j_0} is non-increasing,

≤ ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0}(t + t′) · ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}}(t + t′) dt′ ≤ ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0}(t + t′) dt′ ≤ V_{j_0}(t).

The last inequality is also a consequence of the fact that V_{j_0} is non-increasing. For heuristic H^{1/2,1/2} we similarly have:

Σ_{k=0..K} V_{i_k}(t) ≤ Σ_{k=0..K} ∫_0^{Δ−t} p_{i_k}(t′) (1/K) V_{j_0}(t + t′) ∏_{k′∈{0..K}, k′≠k} P_{i_{k′}}(t + t′) dt′
≤ (1/K) Σ_{k=0..K} ∫_0^{Δ−t} p_{i_k}(t′) V_{j_0}(t + t′) dt′ ≤ (1/K) · K · V_{j_0}(t) = V_{j_0}(t).

For heuristic Ĥ^{1,1}, the opportunity cost function V_{j_0} is by definition split in such a manner that Σ_{k=0..K} V_{i_k}(t) ≤ V_{j_0}(t). Consequently, we have proved that our new heuristics H^{1,0}, H^{1/2,1/2} and Ĥ^{1,1} avoid the overestimation of the opportunity cost. □
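The three splitting heuristics can be stated compactly in code. The sketch below is our own discrete rendering (function names are ours); split_h11_normalized implements the normalization rule of Ĥ^{1,1}, so the shares can never sum above V_{j_0}:

```python
def _share(V, enabler_cdfs, k):
    # (V_{j0} * prod_{k' != k} P_{i_k'})(t) -- the raw H^{1,1} share of method k
    out = list(V)
    for k2, cdf in enumerate(enabler_cdfs):
        if k2 != k:
            out = [a * b for a, b in zip(out, cdf)]
    return out

def split_h10(V, enabler_cdfs, k):
    """H^{1,0}: method k gets the full (discounted) cost, all others get zero."""
    shares = [[0.0] * len(V) for _ in enabler_cdfs]
    shares[k] = _share(V, enabler_cdfs, k)
    return shares

def split_half(V, enabler_cdfs):
    """H^{1/2,1/2}: every enabler gets an equal fraction of its raw share."""
    K = len(enabler_cdfs)
    return [[x / K for x in _share(V, enabler_cdfs, k)] for k in range(K)]

def split_h11_normalized(V, enabler_cdfs):
    """Normalized H^{1,1}: rescale the raw shares wherever their sum would
    exceed V_{j0}(t), so the total can never overestimate the cost."""
    K = len(enabler_cdfs)
    raw = [_share(V, enabler_cdfs, k) for k in range(K)]
    out = [[0.0] * len(V) for _ in range(K)]
    for t in range(len(V)):
        s = sum(raw[k][t] for k in range(K))
        scale = 1.0 if s <= V[t] or s == 0.0 else V[t] / s
        for k in range(K):
            out[k][t] = raw[k][t] * scale
    return out
```

With two enablers that are certainly completed, the raw H^{1,1} shares both equal V and would sum to 2V; the normalized variant scales them back so they sum to exactly V, illustrating Theorems 2 and 3 in miniature.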
The reason why we have introduced all three new heuristics is the following: since H^{1,1} overestimates the opportunity cost, one has to choose which method m_{i_k} will receive the reward for enabling the method m_{j_0}, which is exactly what the heuristic H^{1,0} does. However, heuristic H^{1,0} leaves K − 1 methods that precede the method m_{j_0} without any reward, which leads to starvation. Starvation can be avoided if the opportunity cost functions are split using heuristic H^{1/2,1/2}, which provides reward to all enabling methods. However, the sum of the split opportunity cost functions for the H^{1/2,1/2} heuristic can be smaller than the single non-zero split opportunity cost function for the H^{1,0} heuristic, which is clearly undesirable. Such a situation (Figure 4, heuristic H^{1,0}) occurs because the mean (f + g)/2 of two functions f, g is not smaller than either f or g only if f = g. This is why we have proposed the Ĥ^{1,1} heuristic, which by definition avoids the overestimation, underestimation and starvation problems.

7. EXPERIMENTAL EVALUATION
Since the VFP algorithm that we introduced provides two orthogonal improvements over the OC-DEC-MDP algorithm, the experimental evaluation we performed consisted of two parts: in part 1, we tested empirically the quality of the solutions that a locally optimal solver (either OC-DEC-MDP or VFP) finds, given that it uses different opportunity cost function splitting heuristics, and in part 2, we compared the runtimes of the VFP and OC-DEC-MDP algorithms for a variety of mission plan configurations.
Part 1: We first ran the VFP algorithm on a generic mission plan configuration from Figure 3 where only the methods m_{j_0}, m_{i_1}, m_{i_2} and m_0 were present. The time windows of all methods were set to 400, the duration p_{j_0} of method m_{j_0} was uniform, i.e., p_{j_0}(t) = 1/400, and the durations p_{i_1}, p_{i_2} of methods m_{i_1}, m_{i_2} were normal distributions, i.e., p_{i_1} = N(μ = 250, σ = 20) and p_{i_2} = N(μ = 200, σ = 100).
We assumed that only method $m_{j_0}$ provided reward, i.e., $r_{j_0} = 10$ was the reward for finishing the execution of method $m_{j_0}$ before time $t = 400$. We show our results in Figure 4, where the x-axis of each graph represents time and the y-axis represents the opportunity cost. The first graph confirms that when the opportunity cost function $V_{j_0}$ was split into opportunity cost functions $V_{i_1}$ and $V_{i_2}$ using the $H^{\langle 1,1\rangle}$ heuristic, the function $V_{i_1} + V_{i_2}$ was not always below the $V_{j_0}$ function. In particular, $V_{i_1}(280) + V_{i_2}(280)$ exceeded $V_{j_0}(280)$ by 69%. When heuristics $H^{\langle 1,0\rangle}$, $H^{\langle 1/2,1/2\rangle}$ and $\widehat{H}^{\langle 1,1\rangle}$ were used (graphs 2, 3 and 4), the function $V_{i_1} + V_{i_2}$ was always below $V_{j_0}$. We then shifted our attention to the civilian rescue domain introduced in Figure 1, for which we sampled all action execution durations from the normal distribution $N(\mu = 5, \sigma = 2)$. To obtain a baseline for the heuristic performance, we implemented a globally optimal solver that found the true expected total reward for this domain (Figure 6a). We then compared this reward with the expected total reward found by a locally optimal solver guided by each of the discussed heuristics. Figure 6a, which plots on the y-axis the expected total reward of a policy, complements our previous results: the $H^{\langle 1,1\rangle}$ heuristic overestimated the expected total reward by 280%, whereas the other heuristics were able to guide the locally optimal solver close to the true expected total reward.

Part 2: We then chose $H^{\langle 1,1\rangle}$ to split the opportunity cost functions and conducted a series of experiments aimed at testing the scalability of VFP for various mission plan configurations, using the performance of the OC-DEC-MDP algorithm as a benchmark.
We began the VFP scalability tests with the configuration from Figure 5a associated with the civilian rescue domain, for which method execution durations were extended to normal distributions $N(\mu = 30, \sigma = 5)$, and the deadline was extended to $\Delta = 200$.

Figure 5: Mission plan configurations: (a) civilian rescue domain, (b) chain of n methods, (c) tree of n methods with branching factor = 3, and (d) square mesh of n methods.

Figure 6: VFP performance in the civilian rescue domain.

We decided to test the runtime of the VFP algorithm at three different levels of accuracy, i.e., the approximation parameters $\epsilon_P$ and $\epsilon_V$ were chosen such that the cumulative error of the solution found by VFP stayed within 1%, 5% and 10% of the solution found by the OC-DEC-MDP algorithm. We then ran both algorithms for a total of 100 policy improvement iterations. Figure 6b shows the performance of the VFP algorithm in the civilian rescue domain (the y-axis shows the runtime in milliseconds). As we see, for this small domain, VFP runs 15% faster than OC-DEC-MDP when computing the policy with an error of less than 1%. For comparison, the globally optimal solver did not terminate within the first three hours of its runtime, which shows the strength of opportunistic solvers like OC-DEC-MDP. We next decided to test how VFP performs in a more difficult domain, i.e., with methods forming a long chain (Figure 5b). We tested chains of 10, 20 and 30 methods, increasing at the same time the method time windows to 350, 700 and 1050 to ensure that later methods can be reached. We show the results in Figure 7a, where we vary on the x-axis the number of methods and plot on the y-axis the algorithm runtime (note the logarithmic scale). As we observe, scaling up the domain reveals the high performance of VFP: within 1% error, it runs up to 6 times faster than OC-DEC-MDP. We then tested how VFP scales up when the methods are arranged into a tree (Figure 5c).
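The chain, tree, and mesh configurations of Figure 5 can be generated as sets of precedence constraints $C_\prec$; the sketch below uses our own function names, with methods identified by integers (chain, tree) or (row, column) pairs (mesh):

```python
def chain_constraints(n):
    """Chain of n methods: m_1 -> m_2 -> ... -> m_n (Figure 5b)."""
    return [(i, i + 1) for i in range(1, n)]

def tree_constraints(depth, branching=3):
    """Complete tree: each method enables `branching` children (Figure 5c)."""
    edges, frontier, next_id = [], [0], 1
    for _ in range(depth):
        new_frontier = []
        for parent in frontier:
            for _ in range(branching):
                edges.append((parent, next_id))
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return edges

def mesh_constraints(n):
    """n x n mesh: every method in column j enables every method in
    column j+1, i.e. C_prec = {<m_(i,j), m_(k,j+1)>} (Figure 5d)."""
    return [((i, j), (k, j + 1))
            for j in range(1, n)
            for i in range(1, n + 1)
            for k in range(1, n + 1)]
```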
In particular, we considered trees with a branching factor of 3 and depths of 2, 3 and 4, increasing at the same time the time horizon from 200 to 300, and then to 400. We show the results in Figure 7b. Although the speedups are smaller than in the case of a chain, the VFP algorithm still runs up to 4 times faster than OC-DEC-MDP when computing the policy with an error of less than 1%. We finally tested how VFP handles domains with methods arranged into an $n \times n$ mesh, i.e., $C_\prec = \{\langle m_{i,j}, m_{k,j+1}\rangle\}$ for $i = 1,\dots,n$; $k = 1,\dots,n$; $j = 1,\dots,n-1$. In particular, we consider meshes of 3×3, 4×4, and 5×5 methods.

Figure 4: Visualization of heuristics for opportunity cost splitting.

Figure 7: Scalability experiments for OC-DEC-MDP and VFP for different network configurations.

For such configurations we have to greatly increase the time horizon, since the probabilities of enabling the final methods by a particular time decrease exponentially. We therefore vary the time horizons from 3000 to 4000, and then to 5000. We show the results in Figure 7c where, especially for larger meshes, the VFP algorithm runs up to one order of magnitude faster than OC-DEC-MDP, while finding a policy that is within less than 1% of the policy found by OC-DEC-MDP.

8. CONCLUSIONS

While the Decentralized Markov Decision Process (DEC-MDP) model has been very popular for modeling agent-coordination problems, it is very difficult to solve, especially for real-world domains. In this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs.
Our heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements over OC-DEC-MDP: (i) it sped up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected OC-DEC-MDP's overestimation of the opportunity cost. In terms of related work, we have extensively discussed the OC-DEC-MDP algorithm [4]. Furthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11]. Unfortunately, they fail at present to scale up to large-scale domains. Beyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DEC-POMDPs [8] [12] [13]; yet, they have traditionally not dealt with uncertain execution times and temporal constraints. Finally, value function techniques have been studied in the context of single-agent MDPs [7] [9]. However, similarly to [6], they fail to address the lack of global state knowledge, which is a fundamental issue in decentralized planning.

Acknowledgments

This material is based upon work supported by the DARPA/IPTO COORDINATORS program and the Air Force Research Laboratory under Contract No. FA875005C0030. The authors also want to thank Sven Koenig and the anonymous reviewers for their valuable comments.

9. REFERENCES

[1] R. Becker, V. Lesser, and S. Zilberstein. Decentralized MDPs with Event-Driven Interactions. In AAMAS, pages 302-309, 2004.
[2] R. Becker, S. Zilberstein, V. Lesser, and C. V. Goldman. Transition-Independent Decentralized Markov Decision Processes. In AAMAS, pages 41-48, 2003.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of Markov decision processes. In UAI, pages 32-37, 2000.
[4] A. Beynier and A. Mouaddib.
A polynomial algorithm for decentralized Markov decision processes with temporal constraints. In AAMAS, pages 963-969, 2005.
[5] A. Beynier and A. Mouaddib. An iterative algorithm for solving constrained decentralized Markov decision processes. In AAAI, pages 1089-1094, 2006.
[6] C. Boutilier. Sequential optimality and coordination in multiagent systems. In IJCAI, pages 478-485, 1999.
[7] J. Boyan and M. Littman. Exact solutions to time-dependent MDPs. In NIPS, pages 1026-1032, 2000.
[8] C. Goldman and S. Zilberstein. Optimizing information exchange in cooperative multi-agent systems, 2003.
[9] L. Li and M. Littman. Lazy approximation for solving continuous finite-horizon MDPs. In AAAI, pages 1175-1180, 2005.
[10] Y. Liu and S. Koenig. Risk-sensitive planning with one-switch utility functions: Value iteration. In AAAI, pages 993-999, 2005.
[11] D. Musliner, E. Durfee, J. Wu, D. Dolgov, R. Goldman, and M. Boddy. Coordinated plan management using multiagent MDPs. In AAAI Spring Symposium, 2006.
[12] R. Nair, M. Tambe, M. Yokoo, D. Pynadath, and S. Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In IJCAI, pages 705-711, 2003.
[13] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked distributed POMDPs: A synergy of distributed constraint optimization and POMDPs. In IJCAI, pages 1758-1760, 2005.

On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints

ABSTRACT

Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints, but they are very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs.
Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of state and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC-MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.

1. INTRODUCTION

The development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time-critical domains has recently become a very active research field, with potential applications ranging from the coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Exploration Rovers [2]. Because of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and their ability to reason about the utility of actions over time. Key decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DEC-MDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs). Unfortunately, solving these models optimally has been proven to be NEXP-complete [3], hence more tractable subclasses of these models have been the subject of intensive research.
In particular, Networked Distributed POMDPs [13], which assume that not all agents interact with each other, Transition-Independent DEC-MDPs [2], which assume that the transition function decomposes into local transition functions, and DEC-MDPs with Event-Driven Interactions [1], which assume that interactions between agents happen at fixed time points, constitute good examples of such subclasses. Although globally optimal algorithms for these subclasses have demonstrated promising results, the domains on which these algorithms run are still small, and time horizons are limited to only a few time ticks. To remedy that, locally optimal algorithms have been proposed [12] [4] [5]. In particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double-digit time horizons. Additionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains. OC-DEC-MDP is able to scale up to such domains mainly because, instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration. However, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods grow large. The high runtimes of OC-DEC-MDP for such domains are a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval. Furthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods. This reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies.
In this context, we present VFP (Value Function Propagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations that builds on the success of OC-DEC-MDP. VFP introduces two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval. Such a representation allows us to group the time points for which the value function changes at the same rate (i.e., its slope is constant), which results in fast, functional propagation of value functions. Second, we prove (both theoretically and empirically) that OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem. This paper is organized as follows: In Section 2 we motivate this research by introducing a civilian rescue domain, where a team of fire-brigades must coordinate in order to rescue civilians trapped in a burning building. In Section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints, and in Section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers. Sections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements. Finally, in Section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) the new heuristics correct the opportunity cost overestimation problem, leading to higher quality policies, and (ii) by allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm.
2. MOTIVATING EXAMPLE

We are interested in domains where multiple agents must coordinate their plans over time, despite uncertainty in plan execution duration and outcome. One example domain is a large-scale disaster, like a fire in a skyscraper. Because there can be hundreds of civilians scattered across numerous floors, multiple rescue teams have to be dispatched, and radio communication channels can quickly get saturated and become useless. In particular, small teams of fire-brigades must be sent on separate missions to rescue the civilians trapped in dozens of different locations. Picture a small mission plan from Figure 1, where three fire-brigades have been assigned a task to rescue the civilians trapped at site B, accessed from site A (e.g. an office accessed from the floor). General fire-fighting procedures involve both: (i) putting out the flames, and (ii) ventilating the site to let the toxic, high-temperature gases escape, with the restriction that ventilation should not be performed too fast, in order to prevent the fire from spreading. The team estimates that the civilians have 20 minutes before the fire at site B becomes unbearable, and that the fire at site A has to be put out in order to open the access to site B. As has happened in the past in large-scale disasters, communication often breaks down, and hence we assume in this domain that there is no communication between the fire-brigades 1, 2 and 3 (denoted as FB1, FB2 and FB3). Consequently, FB2 does not know if it is already safe to ventilate site A, FB1 does not know if it is already safe to enter site A and start fighting the fire at site B, etc. We assign a reward of 50 for evacuating the civilians from site B, and a smaller reward of 20 for the successful ventilation of site A, since the civilians themselves might succeed in breaking out from site B.
One can clearly see the dilemma that FB2 faces: it can only estimate the durations of the "Fight fire at site A" methods to be executed by FB1 and FB3, and at the same time FB2 knows that time is running out for the civilians. If FB2 ventilates site A too early, the fire will spread out of control, whereas if FB2 waits too long with the ventilation method, the fire at site B will become unbearable for the civilians. In general, agents have to perform a sequence of such difficult decisions.

Figure 1: Civilian rescue domain and a mission plan. Dotted arrows represent implicit precedence constraints within an agent.

In particular, the decision process of FB2 involves first choosing when to start ventilating site A, and then (depending on the time it took to ventilate site A) choosing when to start evacuating the civilians from site B. Such a sequence of decisions constitutes the policy of an agent, and it must be found fast because time is running out.

3. MODEL DESCRIPTION

We encode our decision problems in a model which we refer to as Decentralized MDP with Temporal Constraints². Each instance of our decision problems can be described as a tuple $\langle M, A, C, P, R\rangle$, where $M = \{m_i\}_{i=1}^{|M|}$ is the set of methods and $A = \{A_k\}_{k=1}^{|A|}$ is the set of agents. Agents cannot communicate during mission execution. Each agent $A_k$ is assigned a set $M_k$ of methods, such that $\bigcup_{k=1}^{|A|} M_k = M$ and $\forall i,j,\ i \neq j:\ M_i \cap M_j = \emptyset$. Also, each method of agent $A_k$ can be executed only once, and agent $A_k$ can execute only one method at a time. Method execution times are uncertain, and $P = \{p_i\}_{i=1}^{|M|}$ is the set of distributions of method execution durations. In particular, $p_i(t)$ is the probability that the execution of method $m_i$ consumes time $t$. $C$ is the set of temporal constraints in the system.
Methods are partially ordered, and each method has fixed time windows inside which it can be executed, i.e., $C = C_\prec \cup C_{[]}$, where $C_\prec$ is the set of predecessor constraints and $C_{[]}$ is the set of time window constraints. For $c \in C_\prec$, $c = \langle m_i, m_j\rangle$ means that method $m_i$ precedes method $m_j$, i.e., the execution of $m_j$ cannot start before $m_i$ terminates. In particular, for an agent $A_k$, all its methods form a chain linked by predecessor constraints. We assume that the graph $G = (M, C_\prec)$ is acyclic and has no disconnected nodes (the problem cannot be decomposed into independent subproblems), and that its source and sink vertices identify the source and sink methods of the system. For $c \in C_{[]}$, $c = \langle m_i, EST, LET\rangle$ means that the execution of $m_i$ can only start after the Earliest Starting Time $EST$ and must finish before the Latest End Time $LET$; we allow methods to have multiple disjoint time window constraints. Although the distributions $p_i$ can extend to infinite time horizons, given the time window constraints, the planning horizon $\Delta = \max_{\langle m, \tau, \tau'\rangle \in C_{[]}} \tau'$ is considered the mission deadline. Finally, $R = \{r_i\}_{i=1}^{|M|}$ is the set of non-negative rewards, i.e., $r_i$ is obtained upon successful execution of $m_i$. Since no communication is allowed, an agent can only estimate the probabilities that its methods have already been enabled by other agents. Consequently, if $m_j \in M_k$ is the next method to be executed by agent $A_k$ and the current time is $t \in [0, \Delta]$, the agent has to decide whether to Execute the method $m_j$ (denoted as E) or to Wait (denoted as W). In case agent $A_k$ decides to wait, it remains idle for an arbitrarily small time $\epsilon$, and resumes operation at the same place (about to execute method $m_j$) at time $t + \epsilon$.

²One could also use the OC-DEC-MDP framework, which models both time and resource constraints.
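The tuple $\langle M, A, C, P, R\rangle$ with its two constraint types can be represented directly in code; the following is a minimal sketch with our own class and field names, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    agent: str
    duration_pmf: list                           # duration_pmf[d-1] = Pr(duration == d)
    reward: float = 0.0                          # r_i, obtained on successful completion
    windows: list = field(default_factory=list)  # disjoint (EST, LET) pairs

@dataclass
class DecMdpTc:
    methods: dict     # M: name -> Method (rewards and durations live on the methods)
    precedence: list  # C_prec: (m_i, m_j) means m_i enables m_j

    def deadline(self):
        """Planning horizon: the latest LET over all time-window constraints."""
        return max(let for m in self.methods.values() for _, let in m.windows)
```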
In case agent $A_k$ decides to Execute the next method, two outcomes are possible:

Success: Agent $A_k$ receives reward $r_j$ and moves on to its next method (if such a method exists), so long as the following conditions hold: (i) all the methods $\{m_i \mid \langle m_i, m_j\rangle \in C_\prec\}$ that directly enable method $m_j$ have already been completed, (ii) the execution of method $m_j$ started in some time window of method $m_j$, i.e., $\exists\, \langle m_j, \tau, \tau'\rangle \in C_{[]}$ such that $t \in [\tau, \tau']$, and (iii) the execution of method $m_j$ finished inside the same time window, i.e., agent $A_k$ completed method $m_j$ in time less than or equal to $\tau' - t$.

Failure: If any of the above-mentioned conditions does not hold, agent $A_k$ stops its execution. Other agents may continue their execution, but the methods $m_k \in \{m \mid \langle m_j, m\rangle \in C_\prec\}$ will never become enabled.

The policy $\pi_k$ of an agent $A_k$ is a function $\pi_k: M_k \times [0, \Delta] \to \{W, E\}$, and $\pi_k(\langle m, t\rangle) = a$ means that if $A_k$ is at method $m$ at time $t$, it will choose to perform the action $a$. A joint policy $\pi = [\pi_k]_{k=1}^{|A|}$ is considered to be optimal (denoted as $\pi^*$) if it maximizes the sum of expected rewards for all the agents.

4. SOLUTION TECHNIQUES

4.1 Optimal Algorithms

The optimal joint policy $\pi^*$ is usually found by using the Bellman update principle, i.e., in order to determine the optimal policy for method $m_j$, the optimal policies for methods $m_k \in \{m \mid \langle m_j, m\rangle \in C_\prec\}$ are used. Unfortunately, for our model, the optimal policy for method $m_j$ also depends on the policies for methods $m_i \in \{m \mid \langle m, m_j\rangle \in C_\prec\}$. This double dependency results from the fact that the expected reward for starting the execution of method $m_j$ at time $t$ also depends on the probability that method $m_j$ will be enabled by time $t$. Consequently, if time is discretized, one needs to consider $\Delta^{|M|}$ candidate policies in order to find $\pi^*$. Thus, globally optimal algorithms used for solving real-world problems are unlikely to terminate in reasonable time [11].
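The Success conditions (i)-(iii) above translate directly into a predicate. The sketch below is ours, and it assumes the enablers' completion times are known exactly, whereas in the model an agent can only estimate them:

```python
def execution_succeeds(start, duration, windows, enabler_finish_times):
    """Conditions (i)-(iii) for executing a method at time `start`."""
    # (i) every directly enabling method has already been completed
    if any(f is None or f > start for f in enabler_finish_times):
        return False
    for est, let in windows:
        # (ii) execution starts inside some time window [EST, LET] ...
        if est <= start <= let:
            # (iii) ... and finishes inside the same window
            return start + duration <= let
    return False
```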
The complexity of our model could be reduced if we considered a more restricted version; in particular, if each method $m_j$ were only allowed to be enabled at time points $t \in T_j \subset [0, \Delta]$, the Coverage Set Algorithm (CSA) [1] could be used. However, CSA complexity is doubly exponential in the size of $T_j$, and for our domains $T_j$ can store all values ranging from 0 to $\Delta$.

4.2 Locally Optimal Algorithms

Given the limited applicability of globally optimal algorithms for DEC-MDPs with Temporal Constraints, locally optimal algorithms appear more promising. Specifically, the OC-DEC-MDP algorithm [4] is particularly significant, as it has been shown to easily scale up to domains with hundreds of methods. The idea of the OC-DEC-MDP algorithm is to start with the earliest-starting-time policy $\pi_0$ (according to which an agent starts executing a method $m$ as soon as $m$ has a non-zero chance of being already enabled), and then improve it iteratively until no further improvement is possible. At each iteration, the algorithm starts with some policy $\pi$, which uniquely determines the probabilities $P_{i,[\tau,\tau']}$ that method $m_i$ will be performed in the time interval $[\tau, \tau']$. It then performs two steps:

Step 1: It propagates from sink methods to source methods the values $V_{i,[\tau,\tau']}$ that represent the expected utility of executing method $m_i$ in the time interval $[\tau, \tau']$. This propagation uses the probabilities $P_{i,[\tau,\tau']}$ from the previous algorithm iteration. We call this step the value propagation phase.

Step 2: Given the values $V_{i,[\tau,\tau']}$ from Step 1, the algorithm chooses the most profitable method execution intervals, which are stored in a new policy $\pi'$. It then propagates the new probabilities $P_{i,[\tau,\tau']}$ from source methods to sink methods. We call this step the probability propagation phase.

If policy $\pi'$ does not improve $\pi$, the algorithm terminates. There are two shortcomings of the OC-DEC-MDP algorithm that we address in this paper.
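Steps 1 and 2 alternate inside a policy-improvement loop. The skeleton below is our generic sketch of that control flow only; the two phases are passed in as callables rather than implemented:

```python
def policy_iteration(pi0, value_phase, probability_phase, score, max_iters=100):
    """OC-DEC-MDP-style loop: value propagation (sink -> source), then
    probability propagation (source -> sink), until no improvement."""
    pi, probs = pi0, None
    best = float("-inf")
    for _ in range(max_iters):
        values = value_phase(pi, probs)            # Step 1
        new_pi, probs = probability_phase(values)  # Step 2
        s = score(new_pi)
        if s <= best:                              # pi' does not improve pi
            break
        best, pi = s, new_pi
    return pi
```

On a toy instance where the "policy" is just a number whose score is capped, the loop stops exactly when the score stops improving.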
First, each OC-DEC-MDP state is a pair $\langle m_j, [\tau, \tau']\rangle$, where $[\tau, \tau']$ is a time interval in which method $m_j$ can be executed. While such a state representation is beneficial, in that the problem can be solved with a standard value iteration algorithm, it blurs the intuitive mapping from time $t$ to the expected total reward for starting the execution of $m_j$ at time $t$. Consequently, if some method $m_i$ enables method $m_j$, and the values $V_{j,[\tau,\tau']}$ for all $\tau, \tau' \in [0, \Delta]$ are known, the operation that calculates the values $V_{i,[\tau,\tau']}$ for all $\tau, \tau' \in [0, \Delta]$ (during the value propagation phase) runs in time $O(I^2)$, where $I$ is the number of time intervals³. Since the runtime of the whole algorithm is proportional to the runtime of this operation, the OC-DEC-MDP algorithm runs slowly, especially for big time horizons $\Delta$. Second, while OC-DEC-MDP emphasizes the precise calculation of the values $V_{j,[\tau,\tau']}$, it fails to address a critical issue, namely how the values $V_{j,[\tau,\tau']}$ are split when the method $m_j$ has multiple enabling methods. As we show later, OC-DEC-MDP splits $V_{j,[\tau,\tau']}$ into parts that may overestimate $V_{j,[\tau,\tau']}$ when summed up again. As a result, the methods that precede the method $m_j$ overestimate the value of enabling $m_j$, which, as we show later, can have disastrous consequences. In the next two sections, we address both of these shortcomings.

5. VALUE FUNCTION PROPAGATION (VFP)

The general scheme of the VFP algorithm is identical to the OC-DEC-MDP algorithm, in that it performs a series of policy improvement iterations, each one involving a value and a probability propagation phase. However, instead of propagating separate values, VFP maintains and propagates whole functions; we therefore refer to these phases as the value function propagation phase and the probability function propagation phase.
To this end, for each method $m_i \in M$, we define three new functions:

Value Function, denoted as $v_i(t)$, maps time $t \in [0, \Delta]$ to the expected total reward for starting the execution of method $m_i$ at time $t$.

Opportunity Cost Function, denoted as $V_i(t)$, maps time $t \in [0, \Delta]$ to the expected total reward for starting the execution of method $m_i$ at time $t$, assuming that $m_i$ is enabled.

Probability Function, denoted as $P_i(t)$, maps time $t \in [0, \Delta]$ to the probability that method $m_i$ will be completed before time $t$.

Such a functional representation allows us to easily read off the current policy, i.e., if an agent $A_k$ is at method $m_i$ at time $t$, then it will wait as long as the value function $v_i$ will be greater in the future. Formally:
$$\pi_k(\langle m_i, t\rangle) = \begin{cases} W & \text{if } \exists\, t' > t \text{ such that } v_i(t) < v_i(t') \\ E & \text{otherwise.} \end{cases}$$
We now develop an analytical technique for performing the value function and probability function propagation phases.

³Similarly for the probability propagation phase.

5.1 Value Function Propagation Phase

Suppose that we are performing a value function propagation phase, during which the value functions are propagated from the sink methods to the source methods. At any time during this phase we encounter the situation shown in Figure 2, where the opportunity cost functions $[V_{j_n}]_{n=0}^{N}$ of methods $[m_{j_n}]_{n=0}^{N}$ are known, and the opportunity cost $V_{i_0}$ of method $m_{i_0}$ is to be derived. Let $p_{i_0}$ be the probability distribution function of the execution duration of method $m_{i_0}$, and $r_{i_0}$ the immediate reward for starting and completing the execution of method $m_{i_0}$ inside a time interval $[\tau, \tau']$ such that $\langle m_{i_0}, \tau, \tau'\rangle \in C_{[]}$. The function $V_{i_0}$ is then derived from $r_{i_0}$ and the opportunity costs $V_{j_n,i_0}(t)$, $n = 0,\dots,N$, of future methods. Formally:
$$V_{i_0}(t) = \begin{cases} \int_0^{\tau'-t} p_{i_0}(t')\Big(r_{i_0} + \sum_{n=0}^{N} V_{j_n,i_0}(t + t')\Big)\,dt' & \text{if } t \in [\tau, \tau'] \\ 0 & \text{otherwise.} \end{cases}$$
Note that, for $t \in [\tau, \tau']$, if $h(t) := r_{i_0} + \sum_{n=0}^{N} V_{j_n,i_0}(\tau' - t)$, then $V_{i_0}$ is a convolution of $p_{i_0}$ and $h$: $V_{i_0}(t) = (p_{i_0} * h)(\tau' - t)$.
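In discrete time, the integral above becomes a sum over durations weighted by $p_{i_0}$. A sketch with our own naming, where `h` returns the immediate reward plus downstream opportunity cost at the completion time:

```python
def opportunity_cost_at_start(start, duration_pmf, h, window_end):
    """Discrete analogue of V(start) = integral of p(d) * h(start + d):
    the expected (reward + downstream opportunity cost), counting only
    completions that fall inside the time window."""
    total = 0.0
    for d, prob in enumerate(duration_pmf, start=1):  # duration d with Pr(d)
        finish = start + d
        if finish <= window_end:
            total += prob * h(finish)
    return total
```

For example, with a 50/50 duration of 1 or 2 ticks and h(s) = 10 - s, starting at t = 0 yields 0.5·9 + 0.5·8 = 8.5.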
Assume for now that $V_{j_n,i_0}$ represents the full opportunity cost, postponing the discussion of different techniques for splitting the opportunity cost $V_{j_0}$ into $[V_{j_0,i_k}]_{k=0}^{K}$ until Section 6. We now show how to derive $V_{j_0,i_0}$ (the derivation of $V_{j_n,i_0}$ for $n \neq 0$ follows the same scheme).

Figure 2: Fragment of an MDP of agent $A_k$. Probability functions propagate forward (left to right) whereas value functions propagate backward (right to left).

Let $V_{j_0,i_0}(t)$ be the opportunity cost of starting the execution of method $m_{j_0}$ at time $t$ given that method $m_{i_0}$ has been completed. It is derived by multiplying $V_{j_0}$ by the probability functions of all methods other than $m_{i_0}$ that enable $m_{j_0}$. Formally:
$$V_{j_0,i_0}(t) = \Big(V_{j_0} \cdot \prod_{k=1}^{K} P_{i_k}\Big)(t).$$
Knowing the opportunity cost $V_{i_0}$, we can then easily derive the value function $v_{i_0}$. Let $A_k$ be the agent assigned to the method $m_{i_0}$. If $A_k$ is about to start the execution of $m_{i_0}$, it means that $A_k$ must have completed its part of the mission plan up to the method $m_{i_0}$. Since $A_k$ does not know if the other agents have completed the methods $[m_{l_k}]_{k=1}^{K}$, in order to derive $v_{i_0}$, it has to multiply $V_{i_0}$ by the probability functions of all methods of other agents that enable $m_{i_0}$. Formally:
$$v_{i_0}(t) = \Big(V_{i_0} \cdot \prod_{k=1}^{K} P_{l_k}\Big)(t),$$
where the dependency of $[P_{l_k}]_{k=1}^{K}$ is again ignored. We have consequently shown a general scheme for propagating the value functions: knowing $[v_{j_n}]_{n=0}^{N}$ and $[V_{j_n}]_{n=0}^{N}$ of methods $[m_{j_n}]_{n=0}^{N}$, we can derive $v_{i_0}$ and $V_{i_0}$ of method $m_{i_0}$. In general, the value function propagation scheme starts with the sink nodes. It then at each step visits a method $m$ such that all the methods that $m$ enables have already been marked as visited. The value function propagation phase terminates when all the source methods have been marked as visited.
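The visiting order just described is a topological order of the precedence graph: probability functions are propagated source-to-sink, and value functions in the reverse order. A sketch using Kahn's algorithm (our code, not the paper's):

```python
from collections import defaultdict, deque

def propagation_order(methods, precedence):
    """Source-to-sink order: a method is visited only after all methods
    that enable it; reversed(order) gives the value-propagation order."""
    indegree = {m: 0 for m in methods}
    successors = defaultdict(list)
    for a, b in precedence:
        successors[a].append(b)
        indegree[b] += 1
    queue = deque(m for m in methods if indegree[m] == 0)  # source methods
    order = []
    while queue:
        m = queue.popleft()
        order.append(m)
        for nxt in successors[m]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order
```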
5.2 Reading the Policy

In order to determine the policy of agent $A_k$ for the method $m_{j_0}$, we must identify the set $Z_{j_0}$ of intervals $[z, z'] \subset [0, \Delta]$ in which the agent should wait. One can easily identify the intervals of $Z_{j_0}$ by looking at the time intervals in which the value function $v_{j_0}$ does not decrease monotonically.

5.3 Probability Function Propagation Phase

Assume now that the value functions and opportunity cost values have all been propagated from the sink methods to the source methods, and the sets $Z_j$ for all methods $m_j \in M$ have been identified. Since the value function propagation phase used the probabilities $P_i(t)$ for methods $m_i \in M$ and times $t \in [0, \Delta]$ found at the previous algorithm iteration, we now have to find new values $P_i(t)$ in order to prepare the algorithm for its next iteration. We now show how, in the general case (Figure 2), to propagate the probability functions forward through one method, i.e., we assume that the probability functions $[P_{i_k}]_{k=0}^{K}$ of methods $[m_{i_k}]_{k=0}^{K}$ are known, and the probability function $P_{j_0}$ of method $m_{j_0}$ must be derived. Let $p_{j_0}$ be the probability distribution function of the execution duration of method $m_{j_0}$, and $Z_{j_0}$ be the set of intervals of inactivity for method $m_{j_0}$, found during the last value function propagation phase. If we ignore the dependency of $[P_{i_k}]_{k=0}^{K}$, then the probability $P_{j_0}(t)$ that the execution of method $m_{j_0}$ starts before time $t$ is given by equation (3), where, similarly to [4] and [5], we ignore the dependency of $[P_{l_k}]_{k=1}^{K}$. Observe that $V_{j_0,i_0}$ does not have to be monotonically decreasing, i.e., delaying the execution of the method $m_{i_0}$ can sometimes be profitable. Therefore, the opportunity cost of enabling $m_{j_0}$ that is sent back to $m_{i_0}$ must be greater than or equal to $V_{j_0,i_0}$ and, furthermore, should be non-increasing. We have consequently shown how to propagate the probability functions $[P_{i_k}]_{k=0}^{K}$ of methods $[m_{i_k}]_{k=0}^{K}$ to obtain the probability function $P_{j_0}$ of method $m_{j_0}$.
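The inactivity intervals of Section 5.2, i.e., times at which waiting is better because the value function later increases, can be read off a discretized value function. A sketch with our own naming:

```python
def inactivity_intervals(v):
    """Half-open intervals [z, z') where some later value exceeds v[t],
    i.e., where the policy should Wait rather than Execute."""
    n = len(v)
    future_max = [float("-inf")] * n      # future_max[t] = max of v[t+1:]
    for t in range(n - 2, -1, -1):
        future_max[t] = max(future_max[t + 1], v[t + 1])
    intervals, start = [], None
    for t in range(n):
        wait = future_max[t] > v[t]
        if wait and start is None:
            start = t
        elif not wait and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, n))
    return intervals
```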
In general, the probability function propagation phase starts with the source methods $m_{s_i}$, for which we know that $P_{s_i} = 1$, since they are enabled by default. We then at each step visit a method $m$ such that all the methods that enable $m$ have already been marked as visited. The probability function propagation phase terminates when all the sink methods have been marked as visited.

5.4 The Algorithm

Similarly to the OC-DEC-MDP algorithm, VFP starts the policy improvement iterations with the earliest-starting-time policy $\pi_0$. Then at each iteration it: (i) propagates the value functions $[v_i]_{i=1}^{|M|}$ using the old probability functions $[P_i]_{i=1}^{|M|}$ from the previous algorithm iteration and establishes the new sets $[Z_i]_{i=1}^{|M|}$ of method inactivity intervals, and (ii) propagates the new probability functions $[P_i']_{i=1}^{|M|}$ using the newly established sets $[Z_i]_{i=1}^{|M|}$. These new functions $[P_i']_{i=1}^{|M|}$ are then used in the next iteration of the algorithm. Similarly to OC-DEC-MDP, VFP terminates if the new policy does not improve the policy from the previous algorithm iteration.

5.5 Implementation of Function Operations

So far, we have derived the functional operations for value function and probability function propagation without choosing any function representation. In general, our functional operations can handle continuous time, and one has the freedom to choose a desired function approximation technique, such as piecewise linear [7] or piecewise constant [9] approximation. However, since one of our goals is to compare VFP with the existing OC-DEC-MDP algorithm, which works only for discrete time, we also discretize time, and choose to approximate value functions and probability functions with piecewise linear (PWL) functions.
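A greedy way to build such a PWL approximation within a given error bound is to extend each linear segment while all covered points stay within $\epsilon$ of the interpolant. This sketch is our illustration of the idea, not the paper's implementation:

```python
def pwl_compress(h, eps):
    """Approximate the discrete function h by breakpoints [(t, h[t])] such
    that linear interpolation between breakpoints is within eps of h."""
    n = len(h)
    points = [(0, h[0])]
    i = 0
    while i < n - 1:
        j = i + 1
        # extend the segment i -> j+1 while every covered point still fits
        while j + 1 < n:
            fits = all(
                abs(h[i] + (h[j + 1] - h[i]) * (t - i) / (j + 1 - i) - h[t]) <= eps
                for t in range(i + 1, j + 2))
            if not fits:
                break
            j += 1
        points.append((j, h[j]))
        i = j
    return points
```

On a monotone h, the resulting segment count k is typically much smaller than the horizon, which is what turns the quadratic convolution cost into a cost linear in the horizon.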
When the VFP algorithm propagates the value functions and probability functions, it constantly carries out the operations represented by equations (1) and (3), and we have already shown that these operations are convolutions of some functions $p(t)$ and $h(t)$. If time is discretized, the functions $p(t)$ and $h(t)$ are discrete; however, $h(t)$ can be nicely approximated with a PWL function $\hat{h}(t)$, which is exactly what VFP does. As a result, instead of performing $O(\Delta^2)$ multiplications to compute $f(t)$, VFP only needs to perform $O(k \cdot \Delta)$ multiplications, where $k$ is the number of linear segments of $\hat{h}(t)$ (note that since $h(t)$ is monotonic, $\hat{h}(t)$ is usually close to $h(t)$ with $k \ll \Delta$). Since the $P_i$ values are in the range $[0, 1]$ and the $V_i$ values are in the range $[0, \sum_{m_i \in M} r_i]$, we suggest approximating $V_i(t)$ with $\hat{V}_i(t)$ within error $\epsilon_V$, and $P_i(t)$ with $\hat{P}_i(t)$ within error $\epsilon_P$. We now prove that the overall approximation error accumulated during the value function propagation phase can be expressed in terms of $\epsilon_P$ and $\epsilon_V$:

THEOREM 1. Let $C_\prec$ be a set of precedence constraints of a DEC-MDP with Temporal Constraints, and let $\epsilon_P$ and $\epsilon_V$ be the probability function and value function approximation errors, respectively. The overall error $\epsilon_\pi = \max_V \sup_{t \in [0,\Delta]} |V(t) - \hat{V}(t)|$ of the value function propagation phase is then bounded in terms of $\epsilon_P$ and $\epsilon_V$.

PROOF. In order to establish the bound for $\epsilon_\pi$, we first prove by induction on the size of $C_\prec$ that the overall error of the probability function propagation phase, $\epsilon_\pi(P) = \max_P \sup_{t \in [0,\Delta]} |P(t) - \hat{P}(t)|$, is bounded by $(1 + \epsilon_P)^{|C_\prec|} - 1$.

Induction base: If $|C_\prec| = 1$, only two methods are present, and we perform the operation identified by Equation (3) only once, introducing the error $\epsilon_\pi(P) = \epsilon_P = (1 + \epsilon_P)^{|C_\prec|} - 1$.

Induction step: Suppose that $\epsilon_\pi(P)$ for $|C_\prec| = n$ is bounded by $(1 + \epsilon_P)^n - 1$; we want to prove that this statement holds for $|C_\prec| = n + 1$.
Let $G = \langle M, C_\prec \rangle$ be a graph with at most $n + 1$ edges, and let $G' = \langle M, C'_\prec \rangle$ be a subgraph of $G$ such that $C'_\prec = C_\prec - \{\langle m_i, m_j \rangle\}$, where $m_j \in M$ is a sink node in $G$. From the induction assumption we have that $C'_\prec$ introduces a probability propagation phase error bounded by $(1 + \epsilon_P)^n - 1$. We now add back the link $\langle m_i, m_j \rangle$ to $C'_\prec$, which affects the error of only one probability function, namely $P_j$, by a factor of $(1 + \epsilon_P)$. Since the probability propagation phase error in $C'_\prec$ was bounded by $(1 + \epsilon_P)^n - 1$, in $C_\prec = C'_\prec \cup \{\langle m_i, m_j \rangle\}$ it can be at most $(1 + \epsilon_P)^{n+1} - 1$. Since the probability functions are not overestimated, they remain bounded, and the error of a single value function propagation operation is bounded accordingly, which establishes the bound for $\epsilon_\pi$.

6. SPLITTING THE OPPORTUNITY COST FUNCTIONS

In Section 5 we left out the discussion of how the opportunity cost function $V_{j_0}$ of method $m_{j_0}$ is split into the opportunity cost functions $[V_{j_0,i_k}]_{k=0}^{K}$ sent back to the methods $[m_{i_k}]_{k=0}^{K}$ that directly enable method $m_{j_0}$. So far, we have taken the same approach as in [4] and [5], in that the opportunity cost function $V_{j_0,i_k}$ sent back to method $m_{i_k}$ is a minimal, non-increasing function that dominates $V_{j_0,i_k}(t) = (V_{j_0} \cdot \prod_{k' \in \{0,\dots,K\},\, k' \neq k} P_{i_{k'}})(t)$. We refer to this approach as heuristic $H_{(1,1)}$. Before we prove that this heuristic overestimates the opportunity cost, we discuss three problems that might occur when splitting the opportunity cost functions: (i) overestimation, (ii) underestimation and (iii) starvation.

Figure 3: Splitting the value function of method $m_{j_0}$ among methods $[m_{i_k}]_{k=0}^{K}$.

Consider the situation in Figure 3 when value function propagation for methods $[m_{i_k}]_{k=0}^{K}$ is performed. For each $k = 0, \dots, K$, Equation (1) derives the opportunity cost function $V_{i_k}$ from the immediate reward $r_k$ and the opportunity cost function $V_{j_0,i_k}$.
If $m_0$ is the only method that precedes method $m_{i_k}$, then $V_{i_k,0} = V_{i_k}$ is propagated to method $m_0$, and consequently the opportunity cost for completing the method $m_0$ at time $t$ is equal to $\sum_{k=0}^{K} V_{i_k,0}(t)$. If this cost is overestimated, then agent $A_0$ at method $m_0$ will have too much incentive to finish the execution of $m_0$ at time $t$. Consequently, although the probability $P(t)$ that $m_0$ will be enabled by other agents by time $t$ is low, agent $A_0$ might still find the expected utility of starting the execution of $m_0$ at time $t$ higher than the expected utility of doing it later. As a result, it will choose at time $t$ to start executing method $m_0$ instead of waiting, which can have disastrous consequences. Similarly, if $\sum_{k=0}^{K} V_{i_k,0}(t)$ is underestimated, agent $A_0$ might lose interest in enabling the future methods $[m_{i_k}]_{k=0}^{K}$ and just focus on maximizing the chance of obtaining its immediate reward $r_0$. Since this chance is increased when agent $A_0$ waits, it will consider it more profitable at time $t$ to wait instead of starting the execution of $m_0$, which can have similarly disastrous consequences. Finally, if $V_{j_0}$ is split in such a way that for some $k$, $V_{j_0,i_k} = 0$, it is the method $m_{i_k}$ that underestimates the opportunity cost of enabling method $m_{j_0}$, and similar reasoning applies. We call this problem starvation of method $m_{i_k}$. This short discussion shows the importance of splitting the opportunity cost function $V_{j_0}$ in such a way that the overestimation, underestimation and starvation problems are avoided. We now prove that heuristic $H_{(1,1)}$ overestimates the opportunity cost: the opportunity cost $\sum_{k=0}^{K} V_{i_k}(t_0)$ of starting the execution of methods $[m_{i_k}]_{k=0}^{K}$ at time $t_0 \in [0, \Delta]$ is greater than the opportunity cost $V_{j_0}(t_0)$, which proves the theorem. Figure 4 shows that the overestimation of the opportunity cost is easily observable in practice.
To remedy the problem of opportunity cost overestimation, we propose three alternative heuristics for splitting the opportunity cost functions:

• Heuristic $H_{(1,0)}$: Only one method, $m_{i_{k'}}$, gets the full expected reward for enabling method $m_{j_0}$; i.e., $V_{j_0,i_k}(t) = 0$ for all $k \neq k'$.

• Heuristic $H_{(1/2,1/2)}$: Each of the enabling methods $[m_{i_k}]_{k=0}^{K}$ gets an equal share of the opportunity cost for enabling method $m_{j_0}$.

• Heuristic $\hat{H}_{(1,1)}$: This is a normalized version of the $H_{(1,1)}$ heuristic, in that each method $[m_{i_k}]_{k=0}^{K}$ initially gets the full opportunity cost for enabling the method $m_{j_0}$. To avoid opportunity cost overestimation, we normalize the split functions when their sum exceeds the opportunity cost function to be split.

For the new heuristics, we now prove that they do not overestimate the opportunity cost. For heuristic $\hat{H}_{(1,1)}$, the opportunity cost function $V_{j_0}$ is by definition split in such a manner that $\sum_{k=0}^{K} V_{i_k}(t) \le V_{j_0}(t)$. Consequently, we have proved that our new heuristics $H_{(1,0)}$, $H_{(1/2,1/2)}$ and $\hat{H}_{(1,1)}$ avoid the overestimation of the opportunity cost.

The reason why we have introduced all three new heuristics is the following: since $H_{(1,1)}$ overestimates the opportunity cost, one has to choose which method $m_{i_k}$ will receive the reward for enabling the method $m_{j_0}$, which is exactly what the heuristic $H_{(1,0)}$ does. However, heuristic $H_{(1,0)}$ leaves the remaining methods that precede the method $m_{j_0}$ without any reward, which leads to starvation. Starvation can be avoided if opportunity cost functions are split using heuristic $H_{(1/2,1/2)}$, which provides reward to all enabling methods. However, the sum of the split opportunity cost functions for the $H_{(1/2,1/2)}$ heuristic can be smaller than the non-zero split opportunity cost function for the $H_{(1,0)}$ heuristic, which is clearly undesirable. Such a situation (Figure 4, heuristic $H_{(1,0)}$) occurs because the mean $\frac{f+g}{2}$ of two functions $f, g$ is not smaller than either $f$ or $g$ only if $f = g$.
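The splits, and the normalization that keeps the sum from exceeding the function being split, can be sketched as follows in discrete time. The function name, string labels and array layout are our own illustration:

```python
import numpy as np

def split_opportunity_cost(V_j0, P, heuristic):
    """Split opportunity cost V_j0 among the enabling methods.

    V_j0: array, opportunity cost of enabling m_{j0} over time.
    P:    list of arrays; P[k][t] = probability that enabling method
          m_{i_k} has completed by time t.
    """
    n = len(P)
    if heuristic == "H(1,0)":
        # One chosen method gets everything; the rest starve.
        return [V_j0.copy()] + [np.zeros_like(V_j0) for _ in range(n - 1)]
    if heuristic == "H(1/2,1/2)":
        # Equal shares: no starvation, but the sum may be small.
        return [V_j0 / n for _ in range(n)]
    # H(1,1): each method gets V_j0 weighted by the probability that
    # all *other* enablers have completed; the sum can exceed V_j0.
    splits = []
    for k in range(n):
        others = np.ones_like(V_j0)
        for k2 in range(n):
            if k2 != k:
                others = others * P[k2]
        splits.append(V_j0 * others)
    if heuristic == "H(1,1)":
        return splits
    # Normalized version: scale down wherever the sum of the splits
    # exceeds the function being split, so the total never overestimates.
    total = np.sum(splits, axis=0)
    scale = np.minimum(1.0, V_j0 / np.maximum(total, 1e-12))
    return [s * scale for s in splits]
```

With two enablers that are already completed with certainty, the plain $H_{(1,1)}$ split sums to twice the original cost, while the normalized split sums exactly to it.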
This is why we have proposed the $\hat{H}_{(1,1)}$ heuristic, which by definition avoids the overestimation, underestimation and starvation problems.

7. EXPERIMENTAL EVALUATION

Since the VFP algorithm that we introduced provides two orthogonal improvements over the OC-DEC-MDP algorithm, the experimental evaluation we performed consisted of two parts: in part 1, we tested empirically the quality of solutions that a locally optimal solver (either OC-DEC-MDP or VFP) finds given that it uses different opportunity cost function splitting heuristics, and in part 2, we compared the runtimes of the VFP and OC-DEC-MDP algorithms for a variety of mission plan configurations.

Part 1: We first ran the VFP algorithm on a generic mission plan configuration from Figure 3 where only methods $m_{j_0}$, $m_{i_1}$, $m_{i_2}$ and $m_0$ were present. The time windows of all methods were set to 400, the duration $p_{j_0}$ of method $m_{j_0}$ was uniform, i.e., $p_{j_0}(t) = 1$, and the durations $p_{i_1}, p_{i_2}$ of methods $m_{i_1}, m_{i_2}$ were normal distributions, i.e., $p_{i_1} = N(\mu = 250, \sigma = 20)$ and $p_{i_2} = N(\mu = 200, \sigma = 100)$. We assumed that only method $m_{j_0}$ provided reward; i.e., $r_{j_0} = 10$ was the reward for finishing the execution of method $m_{j_0}$ before time $t = 400$. We show our results in Figure 4, where the x-axis of each of the graphs represents time and the y-axis represents the opportunity cost. The first graph confirms that when the opportunity cost function $V_{j_0}$ was split into opportunity cost functions $V_{i_1}$ and $V_{i_2}$ using the $H_{(1,1)}$ heuristic, the function $V_{i_1} + V_{i_2}$ was not always below the $V_{j_0}$ function. In particular, $V_{i_1}(280) + V_{i_2}(280)$ exceeded $V_{j_0}(280)$ by 69%. When heuristics $H_{(1,0)}$, $H_{(1/2,1/2)}$ and $\hat{H}_{(1,1)}$ were used (graphs 2, 3 and 4), the function $V_{i_1} + V_{i_2}$ was always below $V_{j_0}$. We then shifted our attention to the civilian rescue domain introduced in Figure 1, for which we sampled all action execution durations from the normal distribution $N(\mu = 5, \sigma = 2)$.
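Duration distributions like the $N(\mu = 250, \sigma = 20)$ used above must be placed on discrete time ticks before they can feed a discrete-time propagation step. One plausible discretization, sketched with our own naming (not a routine from the paper), bins the normal density per tick and renormalizes:

```python
import math

def discretize_normal(mu, sigma, delta):
    """Discretize N(mu, sigma) onto ticks 0..delta and renormalize,
    giving a duration distribution usable in discrete-time propagation."""
    def cdf(x):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    # Probability mass of each unit-wide bin centered on tick d.
    p = [cdf(d + 0.5) - cdf(d - 0.5) for d in range(delta + 1)]
    total = sum(p)
    return [x / total for x in p]
```

The renormalization accounts for the small amount of mass that falls outside the method's time window.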
To obtain a baseline for the heuristic performance, we implemented a globally optimal solver that found the true expected total reward for this domain (Figure 6a). We then compared this reward with the expected total reward found by a locally optimal solver guided by each of the discussed heuristics. Figure 6a, which plots on the y-axis the expected total reward of a policy, complements our previous results: the $H_{(1,1)}$ heuristic overestimated the expected total reward by 280%, whereas the other heuristics were able to guide the locally optimal solver close to the true expected total reward.

Part 2: We then chose $H_{(1,1)}$ to split the opportunity cost functions and conducted a series of experiments aimed at testing the scalability of VFP for various mission plan configurations, using the performance of the OC-DEC-MDP algorithm as a benchmark. We began the VFP scalability tests with a configuration from Figure 5a associated with the civilian rescue domain, for which method execution durations were extended to normal distributions $N(\mu = 30, \sigma = 5)$, and the deadline was extended to $\Delta = 200$.

Figure 5: Mission plan configurations: (a) civilian rescue domain, (b) chain of n methods, (c) tree of n methods with branching factor = 3 and (d) square mesh of n methods.

Figure 6: VFP performance in the civilian rescue domain.

We decided to test the runtime of the VFP algorithm running at three different levels of accuracy; i.e., different approximation parameters $\epsilon_P$ and $\epsilon_V$ were chosen such that the cumulative error of the solution found by VFP stayed within 1%, 5% and 10% of the solution found by the OC-DEC-MDP algorithm. We then ran both algorithms for a total of 100 policy improvement iterations. Figure 6b shows the performance of the VFP algorithm in the civilian rescue domain (the y-axis shows the runtime in milliseconds). As we see, for this small domain, VFP runs 15% faster than OC-DEC-MDP when computing the policy with an error of less than 1%.
For comparison, the globally optimal solver did not terminate within the first three hours of its runtime, which shows the strength of opportunistic solvers like OC-DEC-MDP. We next decided to test how VFP performs in a more difficult domain, i.e., with methods forming a long chain (Figure 5b). We tested chains of 10, 20 and 30 methods, increasing at the same time the method time windows to 350, 700 and 1050 to ensure that later methods can be reached. We show the results in Figure 7a, where we vary on the x-axis the number of methods and plot on the y-axis the algorithm runtime (notice the logarithmic scale). As we observe, scaling up the domain reveals the high performance of VFP: within 1% error, it runs up to 6 times faster than OC-DEC-MDP. We then tested how VFP scales up given that the methods are arranged into a tree (Figure 5c). In particular, we considered trees with a branching factor of 3 and depths of 2, 3 and 4, increasing at the same time the time horizon from 200 to 300, and then to 400. We show the results in Figure 7b. Although the speedups are smaller than in the case of a chain, the VFP algorithm still runs up to 4 times faster than OC-DEC-MDP when computing the policy with an error of less than 1%. We finally tested how VFP handles domains with methods arranged into an $n \times n$ mesh, i.e., $C_\prec = \{(m_{i,j}, m_{k,j+1})\}$ for $i = 1, \dots, n$; $k = 1, \dots, n$; $j = 1, \dots, n - 1$. In particular, we considered meshes of $3 \times 3$, $4 \times 4$ and $5 \times 5$ methods. For such configurations we have to greatly increase the time horizon, since the probabilities of enabling the final methods by a particular time decrease exponentially. We therefore vary the time horizons from 3000 to 4000, and then to 5000.

Figure 4: Visualization of heuristics for opportunity cost splitting.

Figure 7: Scalability experiments for OC-DEC-MDP and VFP for different network configurations.
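The mesh precedence set $C_\prec$ defined above can be generated directly from its index ranges. A small sketch with our own naming:

```python
def mesh_constraints(n):
    """Precedence constraints for the n x n mesh configuration: every
    method in column j enables every method in column j + 1."""
    return {((i, j), (k, j + 1))
            for j in range(1, n)          # columns 1 .. n-1
            for i in range(1, n + 1)
            for k in range(1, n + 1)}
```

For the 3 x 3 mesh this yields 2 column pairs with 9 edges each, i.e., 18 precedence constraints, which illustrates why enabling the final methods becomes increasingly improbable by any fixed time.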
We show the results in Figure 7c where, especially for larger meshes, the VFP algorithm runs up to one order of magnitude faster than OC-DEC-MDP while finding a policy that is within less than 1% of the policy found by OC-DEC-MDP.

8. CONCLUSIONS

Although the Decentralized Markov Decision Process (DEC-MDP) model has been very popular for modeling agent-coordination problems, it is very difficult to solve, especially for real-world domains. In this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements over OC-DEC-MDP: (i) it sped up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected OC-DEC-MDP's overestimation of the opportunity cost. In terms of related work, we have extensively discussed the OC-DEC-MDP algorithm [4]. Furthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11]. Unfortunately, they fail at present to scale up to large-scale domains. Beyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DEC-POMDPs [8] [12] [13], yet they have traditionally not dealt with uncertain execution times and temporal constraints. Finally, value function techniques have been studied in the context of single-agent MDPs [7] [9]. However, similarly to [6], they fail to address the lack of global state knowledge, which is a fundamental issue in decentralized planning.
On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints

ABSTRACT

Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints, but they are very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of state and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC-MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.

1. INTRODUCTION

The development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time-critical domains has recently become a very active research field, with potential applications ranging from the coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Exploration Rovers [2]. Because of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and their ability to reason about the utility of actions over time.
Key decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DEC-MDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs). Unfortunately, solving these models optimally has been proven to be NEXP-complete [3]; hence, more tractable subclasses of these models have been the subject of intensive research. In particular, Network Distributed POMDPs [13], which assume that not all the agents interact with each other; Transition Independent DEC-MDPs [2], which assume that the transition function is decomposable into local transition functions; and DEC-MDPs with Event-Driven Interactions [1], which assume that interactions between agents happen at fixed time points, constitute good examples of such subclasses. Although globally optimal algorithms for these subclasses have demonstrated promising results, the domains on which these algorithms run are still small, and time horizons are limited to only a few time ticks. To remedy that, locally optimal algorithms have been proposed [12] [4] [5]. In particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double-digit time horizons. Additionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains. OC-DEC-MDP is able to scale up to such domains mainly because, instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration. However, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods grow large.
The reason for the high runtimes of OC-DEC-MDP on such domains is its huge state space; i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval. Furthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods. This reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies. In this context, we present VFP (Value Function Propagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations that builds on the success of OC-DEC-MDP. VFP introduces two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval. Such a representation allows us to group the time points for which the value function changes at the same rate (i.e., its slope is constant), which results in fast, functional propagation of value functions. Second, we prove (both theoretically and empirically) that OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem. This paper is organized as follows: In Section 2 we motivate this research by introducing a civilian rescue domain where a team of fire brigades must coordinate in order to rescue civilians trapped in a burning building. In Section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints, and in Section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers. Sections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements.
Finally, in Section 7 we demonstrate empirically the impact of our two orthogonal improvements; i.e., we show that: (i) the new heuristics correct the opportunity cost overestimation problem, leading to higher quality policies, and (ii) by allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm.
I-55 Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory It is well established by conflict theorists and others that successful negotiation should incorporate creating value as well as claiming value. Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier. 
[ "autom negoti", "negoti", "creat valu", "claim valu", "mediat", "ineffici compromis", "dilemma", "concess", "deadlock situat", "uncertainti", "incomplet inform", "mcdm", "integr negoti", "multi-criterion decis make" ] [ "P", "P", "P", "P", "P", "U", "U", "U", "U", "U", "M", "U", "M", "M" ] Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory Quoc Bao Vo School of Computer Science and IT RMIT University, Australia vqbao@cs.rmit.edu.au Lin Padgham School of Computer Science and IT RMIT University, Australia linpa@cs.rmit.edu.au ABSTRACT It is well established by conflict theorists and others that successful negotiation should incorporate creating value as well as claiming value. Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. 
In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems; K.4.4 [Computers and Society]: Electronic Commerce General Terms Algorithms, Design 1. INTRODUCTION Given that negotiation is perhaps one of the oldest activities in the history of human communication, it is perhaps surprising that experiments on negotiations have shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] provide analyses of the negotiators' failure to achieve efficient agreements in practice and their unwillingness to disclose private information due to strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining, which they refer to as creating value and claiming value. They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma. We argue that the Negotiator's Dilemma is essentially information-based, due to the private information held by the agents. Such private information contains both the information that implies the agent's bottom lines (or, her walk-away positions) and the information that enforces her bargaining strength.
For instance, when bargaining to sell a house to a potential buyer, the seller would try to hide her actual reserve price as much as possible, for she hopes to reach an agreement at a much higher price than her reserve price. On the other hand, the outside options available to her (e.g. other buyers who have expressed genuine interest with fairly good offers) constitute the information that improves her bargaining strength, which she would like to convey to her opponent. But at the same time, her opponent is well aware of the fact that it is in her interest to boost her bargaining strength, and thus will not accept any information she sends out unless it is substantiated by evidence. Coming back to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other, making information manipulation part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process. That is, a negotiator may refuse to concede on an important conflicting issue by claiming that he has made a major concession (on another issue) to meet his opponent's interests, even though the concession he made could be insignificant to him. For instance, few buyers would start a bargaining session with a dealer over a notebook computer by declaring that they are most interested in an extended warranty for the item and are therefore prepared to pay a high price to get such an extended warranty. Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation to allow computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). [978-81-904262-7-5 (RPS), © 2007 IFAAMAS]
However, because of the Negotiator's Dilemma, and given even bargaining power and incomplete information, the following two undesirable situations often arise: (i) negotiators reach inefficient compromises, or (ii) negotiators engage in a deadlock situation in which both negotiators refuse to act under incomplete information and at the same time do not want to disclose more information. In this paper, we argue for the role of a mediator to resolve the above two issues. The mediator thus plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the information disclosed by the negotiators, preventing negotiators from using uncertainty and private information as a strategic device. To take advantage of existing results in the negotiation analysis and operations research (OR) literature [18], we employ multi-criteria decision making (MCDM) theory to allow the negotiation problem to be represented and analysed. Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses future work with some concluding remarks. 2. BACKGROUND 2.1 Multi-criteria decision making theory Let A denote the set of feasible alternatives available to a decision maker M. Since an act, or decision, a in A may involve multiple aspects, we usually describe the alternatives a with a set of attributes j (j = 1, ..., m). (Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives X1, ..., Xk. We assume that Xi (i = 1, ..., k) maps the alternatives to real numbers. Thus, a tuple (x1, ..., xk) = (X1(a), ..., Xk(a)) denotes the consequence of the act a to the decision maker M. By definition, objectives are statements that delineate the desires of a decision maker. Thus, M wishes to maximise his objectives.
However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other, in that improved achievement of one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like minimise cost and maximise the quality of services. Since better services can often only be attained for a price, these objectives conflict. Due to the conflicting nature of a decision maker's objectives, M usually has to settle on a compromise solution. That is, he may have to choose an act a ∈ A that does not optimise every objective. This is the topic of multi-criteria decision making theory. Part of the solution to this problem is that M has to try to identify the Pareto frontier in the consequence space {(X1(a), ..., Xk(a))}a∈A. DEFINITION 1. (Dominance) Let x = (x1, ..., xk) and x′ = (x′1, ..., x′k) be two consequences. x dominates x′ iff xi ≥ x′i for all i, and the inequality is strict for at least one i. The Pareto frontier in a consequence space then consists of all consequences that are not dominated by any other consequence. This is illustrated in Fig. 1, in which an alternative consists of two attributes d1 and d2 and the decision maker tries to maximise the two objectives X1 and X2. A decision a ∈ A whose consequence does not lie on the Pareto frontier is inefficient. [Figure 1: The Pareto frontier. The alternative space A (attributes d1, d2) is mapped to the consequence space (X1(a), X2(a)), where the Pareto frontier and the optimal consequence are marked.] While the Pareto frontier allows M to avoid taking inefficient decisions, M still has to decide which of the efficient consequences on the Pareto frontier is most preferred by him. MCDM theorists introduce a mechanism to allow the objective components of consequences to be normalised to the payoff valuations for the objectives.
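Definition 1 can be checked mechanically over any finite set of consequence vectors. The following is a minimal sketch; the point set and function names are illustrative, not from the paper:

```python
def dominates(x, y):
    """x dominates y iff x_i >= y_i for all i, strictly for at least one i (Def. 1)."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def pareto_frontier(consequences):
    """Keep the consequences not dominated by any other consequence."""
    return [x for x in consequences
            if not any(dominates(y, x) for y in consequences if y != x)]

points = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1), (1, 1)]
print(pareto_frontier(points))  # -> [(1, 5), (2, 4), (3, 3), (4, 1)]
```

Here (2, 2) and (1, 1) drop out because (3, 3) dominates both; the survivors are exactly the mutually incomparable consequences.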
Consequences can then be ordered: if the gains in satisfaction brought about by C1 (in comparison to C2) equal the losses in satisfaction brought about by C1 (in comparison to C2), then the two consequences C1 and C2 are considered indifferent. M can now construct the set of indifference curves1 in the consequence space (the dashed curves in Fig. 1). The most preferred indifference curve that intersects the Pareto frontier is in focus: its intersection with the Pareto frontier is the sought-after consequence (i.e., the optimal consequence in Fig. 1). 2.2 A negotiation framework A multi-agent negotiation framework consists of: 1. A set of two negotiating agents N = {1, 2}. 2. A set of attributes Att = {α1, ..., αm} characterising the issues the agents are negotiating over. Each attribute α can take a value from the set Valα. 3. A set of alternative outcomes O. An outcome o ∈ O is represented by an assignment of values to the corresponding attributes in Att. 4. Agents' utility: Based on the theory of multiple-criteria decision making [8], we define the agents' utility as follows: • Objectives: Agent i has a set of ni objectives, or interests, denoted by j (j = 1, ..., ni). To measure how much an outcome o fulfills an objective j for an agent i, we use objective functions: for each agent i, we define i's interests using the objective vector function fi = [fij] : O → Rni. • Value functions: Instead of directly evaluating an outcome o, agent i looks at how much his objectives are fulfilled and will make a valuation based on these more basic criteria. Thus, for each agent i, there is a value function σi : Rni → R. In particular, Raiffa [17] shows how to systematically construct an additive value function for each party involved in a negotiation. • Utility: Now, given an outcome o ∈ O, an agent i is able to determine its value, i.e., σi(fi(o)). However, a negotiation infrastructure is usually required to facilitate negotiation.
This might involve other mechanisms and factors/parties, e.g., a mediator, a legal institution, participation fees, etc. The standard way to implement such a thing is to allow money and side-payments. (Footnote 1: In fact, given the k-dimensional space, these should be called indifference surfaces; however, we will not descend to that level of detail.) [The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)] In this paper, we ignore those side-effects and assume that agent i's utility function ui is normalised so that ui : O → [0, 1]. EXAMPLE 1. There are two agents, A and B. Agent A has a task T that needs to be done and also 100 units of a resource R. Agent B has the capacity to perform task T and would like to obtain at least 10 and at most 20 units of the resource R. Agent B is indifferent about any amount between 10 and 20 units of the resource R. The objective functions for both agents A and B are cost and revenue, and they both aim at minimising costs while maximising revenues. Having T done generates for A a revenue rA,T, while doing T incurs a cost cB,T to B. Agent B obtains a revenue rB,R for each unit of the resource R, while providing each unit of the resource R costs agent A cA,R. Assuming that money transfer between agents is possible, the set Att then contains three attributes: • T, taking values from the set {0, 1}, indicates whether the task T is assigned to agent B; • R, taking values from the set of non-negative integers, indicates the amount of resource R being allocated to agent B; and • MT, taking values from R, indicates the payment p to be transferred from A to B. Consider the outcome o = [T = 1, R = k, MT = p], i.e., the task T is assigned to B, A allocates to B k units of the resource R, and A transfers p dollars to B. Then costA(o) = k·cA,R + p and revA(o) = rA,T; and costB(o) = cB,T and revB(o) = k·rB,R + p if 10 ≤ k ≤ 20, and revB(o) = p otherwise. And σi(costi(o), revi(o)) = revi(o) − costi(o), (i = A, B). 3.
PROBLEM FORMALISATION Consider Example 1, and assume that rA,T = $150, cB,T = $100, rB,R = $10 and cA,R = $7. That is, the revenue generated for A exceeds the cost incurred by B to do task T, and B values resource R more highly than the cost for A to provide it. The optimal solution to this problem scenario is to assign task T to agent B and to allocate 20 units of resource R (i.e., the maximal amount of resource R required by agent B) from agent A to agent B. This outcome regarding the resource and task allocation problems leaves payoffs of $10 to agent A and $100 to agent B (certainly, without a money transfer to compensate agent A, this outcome is not a fair one). Any other outcome would leave at least one of the agents worse off. In other words, the presented outcome is Pareto-efficient and should be part of the solution outcome for this problem scenario. However, as the agents still have to bargain over the amount of the money transfer p, neither agent would be willing to disclose their respective costs and revenues regarding the task T and the resource R. As a consequence, agents often do not achieve the optimal outcome presented above in practice. To address this issue, we introduce a mediator to help the agents discover better agreements than the ones they might try to settle on. Note that this problem is essentially the problem of searching for joint gains in a multilateral negotiation in which the involved parties hold strategic information, i.e., the integrative part of a negotiation. In order to help facilitate this process, we introduce the role of a neutral mediator. Before formalising the decision problems faced by the mediator and the negotiating agents, we discuss the properties of the solution outcomes to be achieved by the mediator.
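The payoffs quoted above follow directly from the cost and revenue functions of Example 1. A small sketch, encoding an outcome as the triple (T, k, p) as in the example:

```python
# Parameters assumed from Section 3: rA,T = 150, cB,T = 100, rB,R = 10, cA,R = 7.
r_A_T, c_B_T, r_B_R, c_A_R = 150, 100, 10, 7

def payoffs(T, k, p):
    """Payoffs (revenue - cost) of the outcome o = [T, R = k, MT = p]."""
    rev_A = r_A_T if T else 0
    cost_A = k * c_A_R + p                           # providing k units plus the transfer
    rev_B = (k * r_B_R + p) if 10 <= k <= 20 else p  # B only values 10..20 units
    cost_B = c_B_T if T else 0
    return rev_A - cost_A, rev_B - cost_B

print(payoffs(T=1, k=20, p=0))  # -> (10, 100), the payoffs quoted in the text
```

With T assigned and k = 20, agent A nets 150 − 140 = 10 and agent B nets 200 − 100 = 100, matching the $10/$100 split stated above.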
In a negotiation setting, the two typical design goals would be: • Efficiency: Prevent the agents from settling on an outcome that is not Pareto-optimal; and • Fairness: Avoid agreements that give most of the gains to a subset of agents while leaving the rest with too little. The above goals are axiomatised in Nash's seminal work [16] on cooperative negotiation games. Essentially, Nash advocates the following properties to be satisfied by a solution to the bilateral negotiation problem: (i) it produces only Pareto-optimal outcomes; (ii) it is invariant to affine transformations (of the consequence space); (iii) it is symmetric; and (iv) it is independent of irrelevant alternatives. A solution satisfying Nash's axioms is called a Nash bargaining solution. It then turns out that, by taking the negotiators' utilities as its objectives, the mediator itself faces a multi-criteria decision making problem. The issues faced by the mediator are: (i) the mediator requires access to the negotiators' utility functions, and (ii) making (fair) tradeoffs between different agents' utilities. Our methods allow the agents to repeatedly interact with the mediator so that a Nash solution outcome can be found by the parties. Informally, the problem faced by both the mediator and the negotiators is the construction of the indifference curves. Why are the indifference curves so important? • To the negotiators, knowing the options available along indifference curves opens up opportunities to reach more efficient outcomes. For instance, consider an agent A who is presenting his opponent with an offer θA which she refuses to accept. Rather than having to concede, A could look at his indifference curve going through θA and choose another proposal θ′A. To him, θA and θ′A are indifferent, but θ′A could give some gains to B and thus will be more acceptable to B. In other words, the outcome θ′A is more efficient than θA to these two negotiators.
• To the mediator, constructing indifference curves requires a measure of fairness between the negotiators. The mediator needs to determine how much utility it needs to take away from the other negotiators to give a particular negotiator a specific gain G (in utility). In order to search for integrative solutions within the outcome space O, we characterise the relationship between the agents over the set of attributes Att. As the agents hold different objectives and have different capacities, it may be the case that changing between two values of a specific attribute implies different shifts in utility for the agents. However, the problem of finding the exact Pareto-optimal set is NP-hard [2]. (The Pareto-optimal set is the set of outcomes whose consequences, in the consequence space, correspond to the Pareto frontier.) Our approach is thus to solve this optimisation problem in two steps. In the first step, the more manageable attributes will be solved: these are the attributes that take a finite set of values. The result of this step is a subset of outcomes that contains the Pareto-optimal set. In the second step, we employ an iterative procedure that allows the mediator to interact with the negotiators to find joint improvements that move towards a Pareto-optimal outcome. This approach will not work unless the attributes from Att are independent. Most works on multi-attribute, or multi-issue, negotiation (e.g. [17]) assume that the attributes or the issues are independent, resulting in an additive value function for each agent. ASSUMPTION 1. Let i ∈ N and S ⊆ Att. Denote by S̄ the set Att \ S. Assume that vS and v′S are two assignments of values to the attributes of S, and v1S̄, v2S̄ are two arbitrary value assignments to the attributes of S̄; then ui([vS, v1S̄]) − ui([vS, v2S̄]) = ui([v′S, v1S̄]) − ui([v′S, v2S̄]).
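Assumption 1 holds in particular for additive utility functions: the difference between two assignments to S is the same whatever value S̄ takes (and, by rearrangement, vice versa). A toy check with made-up integer-valued component utilities (the paper's utilities are normalised to [0, 1]; integers are used here only to keep the arithmetic exact):

```python
# Toy additive utility over two attribute sets S = {"a"} and S-bar = {"b"}.
u_a = {"x": 2, "y": 8}   # component utility over attribute a (made-up numbers)
u_b = {"p": 1, "q": 5}   # component utility over attribute b

def u(o):
    return u_a[o["a"]] + u_b[o["b"]]

# Difference between two assignments to S, for each fixed assignment to S-bar:
d1 = u({"a": "x", "b": "p"}) - u({"a": "y", "b": "p"})
d2 = u({"a": "x", "b": "q"}) - u({"a": "y", "b": "q"})
print(d1, d2)  # -> -6 -6: the difference does not depend on the value of b
```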
That is, the utility function of agent i is defined on the attributes from S independently of any value assignment to the other attributes. 4. MEDIATOR-BASED BILATERAL NEGOTIATIONS As discussed by Lax and Sebenius [13], under incomplete information the tension between creating and claiming value is the primary cause of inefficient outcomes. This can be seen most easily in negotiations involving two negotiators: during the distributive phase of the negotiation, the two negotiators' objectives directly oppose each other. We will now formally characterise this relationship between negotiators by defining the opposition between two negotiating parties. The following exposition is mainly reproduced from [9]. Assume for the moment that all attributes from Att take values from the set of real numbers R, i.e., Valj ⊆ R for all j ∈ Att. We further assume that the set O = ×j∈Att Valj of feasible outcomes is defined by constraints that all parties must obey, and that O is convex. Now, an outcome o ∈ O is just a point in the m-dimensional space of real numbers. Then, the questions are: (i) from the point of view of an agent i, is o already the best outcome for i? (ii) if o is not the best outcome for i, then is there another outcome o′ such that o′ gives i a better utility than o and o′ does not cause a utility loss to the other agent j in comparison to o? The above questions can be answered by looking at the directions of improvement of the negotiating parties at o, i.e., the directions in the outcome space O in which their utilities increase at the point o. Under the assumption that the parties' utility functions ui are differentiable and concave, the set of all directions of improvement for a party at a point o can be defined in terms of his most preferred, or gradient, direction at that point.
When the gradient direction ∇ui(o) of agent i at point o directly opposes the gradient direction ∇uj(o) of agent j at point o, the two parties strongly disagree at o and no joint improvement can be achieved for i and j in the locality surrounding o. Since opposition between the two parties can vary considerably over the outcome space (with one pair of outcomes considered highly antagonistic and another pair being highly cooperative), we need to describe the local properties of the relationship. We begin with the opposition at any point of the outcome space Rm. The following definition is reproduced from [9]: DEFINITION 2. 1. The parties are in local strict opposition at a point x ∈ Rm iff for all points x′ ∈ Rm that are sufficiently close to x (i.e., for some ε > 0 and all x′ with ‖x′ − x‖ < ε), an increase of one utility can be achieved only at the expense of a decrease of the other utility. 2. The parties are in local non-strict opposition at a point x ∈ Rm iff they are not in local strict opposition at x, i.e., iff it is possible for both parties to raise their utilities by moving an infinitesimal distance from x. (Klein et al. [10] explore several implications of complex contracts in which attributes are possibly inter-dependent.) 3. The parties are in local weak opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) ≥ 0, i.e., iff the gradients at x of the two utility functions form an acute or right angle. 4. The parties are in local strong opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) < 0, i.e., iff the gradients at x form an obtuse angle. 5. The parties are in global strict (nonstrict, weak, strong) opposition iff for every x ∈ Rm they are in local strict (nonstrict, weak, strong) opposition. Global strict and nonstrict oppositions are complementary cases.
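Items 3 and 4 of Definition 2 suggest a direct numerical test: estimate both gradients and inspect the sign of their dot product. A sketch using central finite differences; the quadratic utilities are illustrative, not from the paper:

```python
def grad(u, x, h=1e-6):
    """Central-difference estimate of the gradient of u at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((u(xp) - u(xm)) / (2 * h))
    return g

def local_opposition(u1, u2, x):
    """'strong' if the gradients form an obtuse angle, else 'weak' (Def. 2, items 3-4)."""
    d = sum(a * b for a, b in zip(grad(u1, x), grad(u2, x)))
    return "strong" if d < 0 else "weak"

u1 = lambda x: -(x[0] - 1) ** 2 - (x[1] - 1) ** 2   # agent 1's peak at (1, 1)
u2 = lambda x: -(x[0] + 1) ** 2 - (x[1] + 1) ** 2   # agent 2's peak at (-1, -1)
print(local_opposition(u1, u2, [0.0, 0.0]))  # -> strong: gradients point apart
print(local_opposition(u1, u2, [2.0, 0.0]))  # -> weak: both gain by moving left
```

At the origin the two gradients point at the opposing ideal points, so the dot product is negative; at (2, 0) both gradients share a negative first component, so a joint improvement direction exists.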
Essentially, under global strict opposition the whole outcome space O becomes the Pareto-optimal set, as at no point in O can the negotiating parties make a joint improvement, i.e., every point in O is a Pareto-efficient outcome. In other words, under global strict opposition the outcome space O can be flattened out into a single line such that for each pair of outcomes x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y); i.e., at every point in O, the gradients of the two utility functions point to the two different ends of the line. Intuitively, global strict opposition implies that there is no way to obtain joint improvements for both agents. As a consequence, the negotiation degenerates to a distributive negotiation: the negotiating parties should try to claim as large a share of the negotiation issues as possible, while the mediator should aim for fairness of the division. On the other hand, global nonstrict opposition allows room for joint improvements, and all parties might be better off trying to realise the potential gains by reaching Pareto-efficient agreements. Weak and strong oppositions indicate different levels of opposition. The weaker the opposition, the more potential gains can be realised, making cooperation the better strategy to employ during negotiation. On the other hand, stronger opposition suggests that the negotiating parties tend to behave strategically, leading to misrepresentation of their respective objectives and utility functions and making joint gains more difficult to realise. We have temporarily been making the assumption that the outcome space O is a subset of Rm. In many real-world negotiations, this assumption would be too restrictive. We will continue our exposition by lifting this restriction and allowing discrete attributes. However, as most negotiations involve only discrete issues with a bounded number of options, we will assume that each attribute takes values either from a finite set or from the set of real numbers R.
In the rest of the paper, we will refer to attributes whose values are from finite sets as simple attributes and to attributes whose values are from R as continuous attributes. The notions of local opposition, i.e., strict, nonstrict, weak and strong, are not applicable to outcome spaces that contain simple attributes, and nor are the notions of global weak and strong opposition. However, the notions of global strict and nonstrict opposition can be generalised to outcome spaces that contain simple attributes. DEFINITION 3. Given an outcome space O, the parties are in global strict opposition iff ∀x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y). The parties are in global nonstrict opposition if they are not in global strict opposition. 4.1 Optimisation on simple attributes In order to extract the optimal values for a subset of the attributes, in the first step of this optimisation process the mediator requests the negotiators to submit their respective utility functions over the set of simple attributes. Let Simp ⊆ Att denote the set of all simple attributes from Att. Note that, due to Assumption 1, agent i's utility function can be characterised as follows: ui([vSimp, vS̄imp]) = wi1 · ui,1([vSimp]) + wi2 · ui,2([vS̄imp]), where S̄imp = Att \ Simp, ui,1 and ui,2 are the utility components of ui over the sets of attributes Simp and S̄imp, respectively, and 0 < wi1, wi2 < 1 with wi1 + wi2 = 1. As the attributes are independent of each other with regard to the agents' utility functions, the optimisation problem over the attributes from Simp can be carried out by fixing ui,2([vS̄imp]) to a constant C, and then searching for the optimal values within the set of attributes Simp. Now, how does the mediator determine the optimal values for the attributes in Simp? Several well-known optimisation strategies are applicable here: • The utilitarian solution: The sum of the agents' utilities is maximised.
Thus, the optimal values are the solution of the following optimisation problem: arg max_{v ∈ ValSimp} Σ_{i∈N} ui(v) • The Nash solution: The product of the agents' utilities is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg max_{v ∈ ValSimp} Π_{i∈N} ui(v) • The egalitarian solution (a.k.a. the maximin solution): The utility of the agent with minimum utility is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg max_{v ∈ ValSimp} min_{i∈N} ui(v) The question now is of course whether a negotiator has an incentive to misrepresent his utility function. First of all, recall that the agents' utility functions are bounded, i.e., ∀o ∈ O, 0 ≤ ui(o) ≤ 1. Thus, the agents have no incentive to overstate their utility regarding an outcome o: if o is the most preferred outcome to an agent i, then he already assigns the maximal utility to o. On the other hand, if o is not the most preferred outcome to i, then by overstating the utility he assigns to o, the agent i runs the risk of having to settle on an agreement which would give him a lower payoff than he is supposed to receive. However, agents do have an incentive to understate their utility if the final settlement will be based on the above solutions alone. Essentially, the mechanism to prevent an agent from understating his utility regarding particular outcomes is to guarantee a certain measure of fairness for the final settlement. That is, the agents lose the incentive to be dishonest, as any gains from taking advantage of the known solutions used to determine the settlement outcome would be offset by the fairness maintenance mechanism. First, we state an easy lemma. LEMMA 1. When Simp contains a single attribute, the agents have an incentive to understate their utility functions regarding outcomes that are not attractive to them.
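For a finite set ValSimp, all three solutions are one-line argmax computations. A sketch with hypothetical reported utilities (the value names and numbers are made up for illustration):

```python
# Hypothetical reported utilities (agent 1, agent 2) for each candidate value.
utils = {"A": (0.1, 1.0), "B": (0.5, 0.5), "C": (0.9, 0.4)}

utilitarian = max(utils, key=lambda v: sum(utils[v]))              # maximise the sum
nash        = max(utils, key=lambda v: utils[v][0] * utils[v][1])  # maximise the product
egalitarian = max(utils, key=lambda v: min(utils[v]))              # maximise the minimum

print(utilitarian, nash, egalitarian)  # -> C C B: egalitarian favours the even split
```

The example shows the solutions can disagree: the utilitarian and Nash rules both pick C, while the egalitarian rule protects the worse-off agent and picks B.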
By way of illustration, consider the set Simp containing only one attribute that can take values from the finite set {A, B, C, D}. Assume that negotiator 1 assigns utilities of 0.4, 0.7, 0.9, and 1 to A, B, C, and D, respectively. Assume also that negotiator 2 assigns utilities of 1, 0.9, 0.7, and 0.4 to A, B, C, and D, respectively. If agent 1 misrepresents his utility function to the mediator by reporting utility 0 for the values A, B and C and utility 1 for the value D, then agent 2, who reports honestly to the mediator, will obtain the worst outcome D under any of the above solutions. Note that agent 1 doesn't need to know agent 2's utility function, nor does he need to know the strategy employed by agent 2. As long as he knows that the mediator is going to employ one of the above three solutions, the above misrepresentation is the dominant strategy for this game. However, when the set Simp contains more than one attribute and none of the attributes strongly dominates the other attributes, then the above problem diminishes by itself thanks to the integrative solution. We of course have to define clearly what it means for an attribute to strongly dominate the other attributes. Intuitively, if most of an agent's utility is concentrated on one of the attributes, then this attribute strongly dominates the other attributes. We again appeal to Assumption 1 on the additivity of utility functions to achieve a measure of fairness within this negotiation setting. Due to Assumption 1, we can characterise agent i's utility component over the set of attributes Simp by the following equation: ui,1([vSimp]) = Σ_{j∈Simp} wij · ui,j([vj]) (1) where Σ_{j∈Simp} wij = 1. Then, an attribute ℓ ∈ Simp strongly dominates the rest of the attributes in Simp (for agent i) iff wiℓ > Σ_{j∈Simp\{ℓ}} wij. Attribute ℓ is then said to be strongly dominant (for agent i) wrt. the set of simple attributes Simp.
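The manipulation in this illustration can be replayed directly, using the utilities 0.4/0.7/0.9/1 and 1/0.9/0.7/0.4 given above: whichever of the three solutions the mediator employs, agent 1's understated report moves the settlement to his favourite outcome D.

```python
vals = ["A", "B", "C", "D"]
u1_true = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0}
u1_lied = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 1.0}  # agent 1 understates A, B, C
u2      = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4}  # agent 2 reports honestly

def settle(u1, u2, rule):
    """Mediator picks the value maximising the given aggregation rule."""
    return max(vals, key=lambda v: rule(u1[v], u2[v]))

rules = {"utilitarian": lambda a, b: a + b,
         "Nash":        lambda a, b: a * b,
         "egalitarian": lambda a, b: min(a, b)}
for name, rule in rules.items():
    print(name, settle(u1_true, u2, rule), "->", settle(u1_lied, u2, rule))
# Every rule moves from B (B and C tie under honest reports; ties broken by
# listing order) to agent 1's favourite outcome D once agent 1 lies.
```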
The following theorem shows that if the set of attributes Simp does not contain a strongly dominant attribute, then the negotiators have no incentive to be dishonest. THEOREM 1. Given a negotiation framework, if for every agent the set of simple attributes does not contain a strongly dominant attribute, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes. So far, we have been concentrating on the efficiency issue while leaving the fairness issue aside. A fair framework not only supports a more satisfactory distribution of utility among the agents, but is also often a good measure to prevent misrepresentation of private information by the agents. Of the three solutions presented above, the utilitarian solution does not support fairness. On the other hand, Nash [16] proves that the Nash solution satisfies the above four axioms for cooperative bargaining games and is considered a fair solution. The egalitarian solution is another mechanism to achieve fairness, essentially by helping the worst off. The problem with these solutions, as discussed earlier, is that they are vulnerable to strategic behaviour when one of the attributes strongly dominates the rest of the attributes. However, there is yet another solution that aims to guarantee fairness: the minimax solution. That is, the utility of the agent with maximum utility is minimised. It is obvious that the minimax solution produces inefficient outcomes. However, to get around this problem (given that the Pareto-optimal set can be tractably computed), we can apply this solution over the Pareto-optimal set only. Let POSet ⊆ ValSimp be the Pareto-optimal subset of the simple outcomes; the minimax solution is then defined to be the solution of the following optimisation problem.
arg min_{v ∈ POSet} max_{i∈N} ui(v) While overall efficiency often suffers under a minimax solution, i.e., the sum of all agents' utilities is often lower than under the other solutions, it can be shown that the minimax solution is less vulnerable to manipulation. THEOREM 2. Given a negotiation framework, under the minimax solution, if the negotiators are uncertain about their opponents' preferences, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes. That is, even when there is only one single simple attribute, if an agent is uncertain whether the other agent's most preferred resolution is also his own most preferred resolution, then he should opt for truth-telling as the optimal strategy. 4.2 Optimisation on continuous attributes When the attributes take values from infinite sets, we assume that they are continuous. This is similar to the common practice in operations research in which linear programming solutions/techniques are applied to integer programming problems. We denote the number of continuous attributes by k, i.e., Att = Simp ∪ S̄imp and |S̄imp| = k. Then, the outcome space O can be represented as follows: O = (Π_{j∈Simp} Valj) × (Π_{l∈S̄imp} Vall), where Π_{l∈S̄imp} Vall ⊆ Rk is the continuous component of O. Let Oc denote the set Π_{l∈S̄imp} Vall. We will refer to Oc as the feasible set and assume that Oc is closed and convex. After carrying out the optimisation over the set of simple attributes, we are able to assign the optimal values to the simple attributes from Simp. Thus, we reduce the original problem to the problem of searching for optimal (and fair) outcomes within the feasible set Oc.
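For a finite candidate set, the minimax-over-POSet rule amounts to filtering out dominated values first and then minimising the maximum reported utility. A sketch with hypothetical utilities; the value E is made up as a dominated option to show the filtering:

```python
def pareto_optimal(vals, u1, u2):
    """Finite-set analogue of POSet: drop values dominated in (u1, u2)."""
    def dom(x, y):
        return (u1[x] >= u1[y] and u2[x] >= u2[y]
                and (u1[x] > u1[y] or u2[x] > u2[y]))
    return [v for v in vals if not any(dom(w, v) for w in vals if w != v)]

def minimax(vals, u1, u2):
    """Minimise the maximum utility, but only over Pareto-optimal values."""
    po = pareto_optimal(vals, u1, u2)
    return min(po, key=lambda v: max(u1[v], u2[v]))

vals = ["A", "B", "C", "D", "E"]
u1 = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0, "E": 0.2}
u2 = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4, "E": 0.3}
print(pareto_optimal(vals, u1, u2))  # -> ['A', 'B', 'C', 'D']: E is dominated
print(minimax(vals, u1, u2))         # -> B (max utility 0.9; ties with C broken by order)
```

Restricting minimisation to the Pareto-optimal subset is exactly what keeps the rule from selecting an inefficient value such as E, which would otherwise have the smallest maximum utility.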
Recall that, by Assumption 1, we can characterise agent i's utility function as follows: $u_i([v^*_{Simp}, v_{\overline{Simp}}]) = C + w_{i,2} \cdot u_{i,2}([v_{\overline{Simp}}])$, where C is the constant $w_{i,1} \cdot u_{i,1}([v^*_{Simp}])$ and $v^*_{Simp}$ denotes the optimal values of the simple attributes in Simp. Hence, without loss of generality (albeit with a blatant abuse of notation), we can take agent i's utility function to be $u_i : \mathbb{R}^k \to [0, 1]$. Accordingly, we will also take the set of outcomes under consideration by the agents to be the feasible set $O^c$. We now state another assumption to be used in this section: ASSUMPTION 2. The negotiators' utility functions can be described by continuously differentiable and concave functions $u_i : \mathbb{R}^k \to [0, 1]$ (i = 1, 2). It should be emphasised that we do not assume that agents explicitly know their utility functions. For the method described in the following to work, we only assume that the agents know the relevant information, e.g., at a certain point within the feasible set $O^c$, the gradient direction of their own utility functions and some section of their respective indifference curves. Assuming that a tentative agreement (which is a point $x \in \mathbb{R}^k$) is currently on the table, the process for the agents to jointly improve this agreement in order to reach a Pareto-optimal agreement can be described as follows. The mediator asks the negotiators to discreetly submit their respective gradient directions at x, i.e., $\nabla u_1(x)$ and $\nabla u_2(x)$. Note that the goal of the process described here is to search for agreements that are more efficient than the tentative agreement currently on the table. That is, we are searching for points $x'$ within the feasible set $O^c$ such that moving to $x'$ from the current tentative agreement x brings more gains to at least one of the agents while not hurting any of the agents. Due to the assumption made above, i.e., that the feasible set $O^c$ is bounded, the conditions for an alternative $x' \in O^c$ to be efficient vary depending on the position of $x'$.
The following results are proved in [9]. Let B(x) = 0 denote the equation of the boundary of $O^c$, defining $x \in O^c$ iff $B(x) \geq 0$. An alternative $x^* \in O^c$ is efficient iff either: A. $x^*$ is in the interior of $O^c$ and the parties are in local strict opposition at $x^*$, i.e., $\nabla u_1(x^*) = -\gamma \nabla u_2(x^*)$ (2) where $\gamma > 0$; or B. $x^*$ is on the boundary of $O^c$, and for some $\alpha, \beta \geq 0$: $\alpha \nabla u_1(x^*) + \beta \nabla u_2(x^*) = \nabla B(x^*)$ (3) We are now interested in answering the following questions: (i) What is the initial tentative agreement $x_0$? (ii) How do we find a more efficient agreement $x_{h+1}$, given the current tentative agreement $x_h$? 4.2.1 Determining a fair initial tentative agreement It should be emphasised that the choice of the initial tentative agreement affects the fairness of the final agreement to be reached by the presented method. For instance, if the initial tentative agreement $x_0$ is chosen to be the most preferred alternative of one of the agents, then it is also a Pareto-optimal outcome, making it impossible to find any joint improvement from $x_0$. However, if $x_0$ is then chosen to be the final settlement and if $x_0$ turns out to be the worst alternative for the other agent, then this outcome is a very unfair one. Thus, it is important that the choice of the initial tentative agreement be made sensibly. Ehtamo et al. [3] present several methods to choose the initial tentative agreement (called the reference point in their paper). However, their goal is to approximate the Pareto-optimal set by systematically choosing a set of reference points. Once an (approximate) Pareto-optimal set is generated, it is left to the negotiators to decide which of the generated Pareto-optimal outcomes is to be chosen as the final settlement. That is, distributive negotiation will then be required to settle the issue. We, on the other hand, are interested in a fair initial tentative agreement which is not necessarily efficient.
Improving a given tentative agreement to yield a Pareto-optimal agreement is considered in the next section. For each attribute $j \in \overline{Simp}$, an agent i will be asked to discreetly submit three values (from the set $Val_j$): the most preferred value, denoted by $pv_{i,j}$; the least preferred value, denoted by $wv_{i,j}$; and a value that gives i an approximately average payoff, denoted by $av_{i,j}$. (Note that this is possible because the set $Val_j$ is bounded.) If $pv_{1,j}$ and $pv_{2,j}$ are sufficiently close, i.e., $|pv_{1,j} - pv_{2,j}| < \Delta$ for some pre-defined $\Delta > 0$, then $pv_{1,j}$ and $pv_{2,j}$ are chosen to be the two core values, denoted by $cv_1$ and $cv_2$. Otherwise, between the two values $pv_{1,j}$ and $av_{1,j}$, we eliminate the one that is closer to $wv_{2,j}$; the remaining value is denoted by $cv_1$. Similarly, we obtain $cv_2$ from the two values $pv_{2,j}$ and $av_{2,j}$. If $cv_1 = cv_2$ then $cv_1$ is selected as the initial value for the attribute j as part of the initial tentative agreement. Otherwise, without loss of generality, we assume that $cv_1 < cv_2$. The mediator randomly selects p values $mv_1, \ldots, mv_p$ from the open interval $(cv_1, cv_2)$, where $p \geq 1$. The mediator then asks the agents to submit their valuations over the set of values $\{cv_1, cv_2, mv_1, \ldots, mv_p\}$. The value for which the two agents' valuations are closest is selected as the initial value for the attribute j as part of the initial tentative agreement. The above procedure guarantees that the agents do not gain by behaving strategically. By performing the above procedure on every attribute $j \in \overline{Simp}$, we are able to identify an initial tentative agreement $x_0$ such that $x_0 \in O^c$. The next step is to compute a new tentative agreement from an existing tentative agreement so that the new one is more efficient than the existing one.
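The per-attribute procedure just described can be sketched in code. The helper names and all numeric values are hypothetical, and the mediator's random sampling of intermediate values is replaced by a fixed candidate list for reproducibility:

```python
# Sketch of the initial-value selection for one continuous attribute.
# pv/av/wv are the discreetly submitted most-preferred, average-payoff,
# and least-preferred values; names and numbers are illustrative.

def core_value(pv, av, wv_opp):
    """Eliminate whichever of pv and av is closer to the opponent's
    least preferred value; return the remaining (core) value."""
    return pv if abs(pv - wv_opp) >= abs(av - wv_opp) else av

def pick_initial(candidates, val1, val2):
    """Choose the candidate on which the two agents' submitted
    valuations are closest."""
    return min(candidates, key=lambda v: abs(val1(v) - val2(v)))

cv1 = core_value(pv=10.0, av=6.0, wv_opp=2.0)   # agent 1's core value
cv2 = core_value(pv=4.0, av=7.0, wv_opp=12.0)   # agent 2's core value
# cv1 != cv2, so the mediator samples values between them and asks both
# agents for valuations (linear valuations, purely for illustration):
candidates = [4.0, 6.0, 8.0, 10.0]
x0_j = pick_initial(candidates, lambda v: v / 10.0, lambda v: 1 - v / 12.0)
```

Keeping the value farther from the opponent's worst value discourages an agent from mis-stating preferences, since a reported most-preferred value that sits near the opponent's worst value is simply eliminated.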
4.2.2 Computing a new tentative agreement Our procedure is a combination of the method of jointly improving directions introduced by Ehtamo et al. [4] and a method we propose in the coming section. Basically, the idea is to see how strong an opposition the parties are in. If the two parties are in (local) weak opposition at the current tentative agreement $x_h$, i.e., their improving directions at $x_h$ are close to each other, then the compromise direction proposed by Ehtamo et al. [4] is likely to point to a better agreement for both agents. However, if the two parties are in local strong opposition at the current point $x_h$, then it is unclear whether the compromise direction would really avoid hurting one of the agents whilst bringing some benefit to the other. We will first review the method proposed by Ehtamo et al. [4] to compute the compromise direction for a group of negotiators at a given point $x \in O^c$. Ehtamo et al. define a function T(x) that describes the mediator's choice of a compromise direction at x. For the case of two-party negotiations, the following bisecting function, denoted by $T^{BS}$, can be defined over the interior of $O^c$. Note that the closed set $O^c$ comprises two disjoint subsets: $O^c = O^c_0 \cup O^c_B$, where $O^c_0$ denotes the set of interior points of $O^c$ and $O^c_B$ denotes the boundary of $O^c$. The bisecting compromise is defined by a function $T^{BS} : O^c_0 \to \mathbb{R}^k$, $$T^{BS}(x) = \frac{\nabla u_1(x)}{\|\nabla u_1(x)\|} + \frac{\nabla u_2(x)}{\|\nabla u_2(x)\|}, \quad x \in O^c_0. \quad (4)$$ Given the current tentative agreement $x_h$ ($h \geq 0$), the mediator has to choose a point $x_{h+1}$ along $d = T(x_h)$ so that all parties gain. Ehtamo et al. then define a mechanism to generate a sequence of points and prove that when the generated sequence is bounded and all generated points belong to the interior set $O^c_0$, the sequence converges to a weakly Pareto-optimal agreement [4, pp.
59-60]. (An outcome $x^*$ is weakly Pareto-optimal if there is no $x \in S$, where S is the set of alternatives, such that $u_i(x) > u_i(x^*)$ for all agents i.) As the above mechanism does not work at the boundary points of $O^c$, we will introduce a procedure that works everywhere in the alternative space $O^c$. Let $x \in O^c$ and let $\theta(x)$ denote the angle between the gradients $\nabla u_1(x)$ and $\nabla u_2(x)$ at x. That is, $$\theta(x) = \arccos\left(\frac{\nabla u_1(x) \cdot \nabla u_2(x)}{\|\nabla u_1(x)\| \, \|\nabla u_2(x)\|}\right)$$ From Definition 2, it is obvious that the two parties are in local strict opposition (at x) iff $\theta(x) = \pi$, in local strong opposition iff $\pi \geq \theta(x) > \pi/2$, and in local weak opposition iff $\pi/2 \geq \theta(x) \geq 0$. Note also that the two vectors $\nabla u_1(x)$ and $\nabla u_2(x)$ define a hyperplane, denoted by $h_\nabla(x)$, in the k-dimensional space $\mathbb{R}^k$. Furthermore, there are two indifference curves of agents 1 and 2 going through the point x, denoted by $IC_1(x)$ and $IC_2(x)$, respectively. Let $hT_1(x)$ and $hT_2(x)$ denote the tangent hyperplanes to the indifference curves $IC_1(x)$ and $IC_2(x)$, respectively, at the point x. The planes $hT_1(x)$ and $hT_2(x)$ intersect $h_\nabla(x)$ in the lines $IS_1(x)$ and $IS_2(x)$, respectively. Note that given a line L(x) going through the point x, there are two (unit) vectors from x along L(x) pointing in two opposite directions, denoted by $L^+(x)$ and $L^-(x)$. We can now informally explain our solution to the problem of searching for joint gains. When it is not possible to obtain a compromise direction for joint improvements at a point $x \in O^c$, either because the compromise vector points to the space outside of the feasible set $O^c$ or because the two parties are in local strong opposition at x, we will consider moving along the indifference curve of one party while trying to improve the utility of the other party. As the mediator does not know the indifference curves of the parties, he has to use the tangent hyperplanes to the indifference curves of the parties at the point x.
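Equation (4) and the angle-based classification of opposition can be checked numerically; the following is an illustrative helper restricted to two dimensions, not code from the paper:

```python
import math

def bisect_direction(g1, g2):
    """T^BS(x): the sum of the two unit gradients, which bisects the
    angle between them (defined when neither gradient is zero)."""
    n1, n2 = math.hypot(*g1), math.hypot(*g2)
    return (g1[0] / n1 + g2[0] / n2, g1[1] / n1 + g2[1] / n2)

def opposition(g1, g2):
    """Classify the local opposition at x from the angle theta(x)."""
    dot = g1[0] * g2[0] + g1[1] * g2[1]
    theta = math.acos(dot / (math.hypot(*g1) * math.hypot(*g2)))
    if math.isclose(theta, math.pi):
        return "strict"
    return "strong" if theta > math.pi / 2 else "weak"

d = bisect_direction((2.0, 0.0), (0.0, 3.0))  # orthogonal gradients
kind = opposition((1.0, 0.0), (-1.0, 1.0))    # angle 3*pi/4: strong
```

For orthogonal gradients the bisecting direction is the diagonal between the two unit gradients, and an angle beyond a right angle is classified as strong opposition, matching the thresholds above.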
Note that the tangent hyperplane to a curve is a useful approximation of the curve in the immediate vicinity of the point of tangency, x. We now describe an iteration step to reach the next tentative agreement $x_{h+1}$ from the current tentative agreement $x_h \in O^c$. A vector v whose tail is at $x_h$ is said to be bounded in $O^c$ if $\exists \lambda > 0$ such that $x_h + \lambda v \in O^c$. To start, the mediator asks the negotiators for their gradients $\nabla u_1(x_h)$ and $\nabla u_2(x_h)$, respectively, at $x_h$. 1. If $x_h$ is a Pareto-optimal outcome according to equation (2) or equation (3), then the process is terminated. 2. If $1 \geq \nabla u_1(x_h) \cdot \nabla u_2(x_h) > 0$ and the vector $T^{BS}(x_h)$ is bounded in $O^c$, then the mediator chooses the compromise improving direction $d = T^{BS}(x_h)$ and applies the method described by Ehtamo et al. [4] to generate the next tentative agreement $x_{h+1}$. 3. Otherwise, among the four vectors $IS_i^\sigma(x_h)$, $i = 1, 2$ and $\sigma = +/-$, the mediator chooses the vector that (i) is bounded in $O^c$, and (ii) is closest to the gradient of the other agent, $\nabla u_j(x_h)$ ($j \neq i$). Denote this vector by $T^G(x_h)$. That is, we will be searching for a point on the indifference curve of agent i, $IC_i(x_h)$, while trying to improve the utility of agent j. Note that when $x_h$ is an interior point of $O^c$, the situation is symmetric for the two agents 1 and 2, and the mediator has the choice of either finding a point on $IC_1(x_h)$ to improve the utility of agent 2, or finding a point on $IC_2(x_h)$ to improve the utility of agent 1. To decide which choice to make, the mediator has to track the distribution of gains throughout the whole process to avoid giving more gains to one agent than to the other. Now, the point $x_{h+1}$ to be generated lies somewhere on the intersection of $IC_i(x_h)$ and the hyperplane defined by $\nabla u_i(x_h)$ and $T^G(x_h)$. This intersection is approximated by $T^G(x_h)$.
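Step 3 above, the choice of $T^G(x_h)$, can be sketched as follows. The candidate directions and the boundedness test are illustrative stand-ins for the four vectors $IS_i^{\pm}(x_h)$ and the membership test for $O^c$:

```python
import math

def choose_TG(candidates, grad_other, bounded):
    """Among candidate directions, keep those bounded in O^c and pick
    the one closest in angle to the other agent's gradient."""
    def cos_to(d):
        num = d[0] * grad_other[0] + d[1] * grad_other[1]
        return num / (math.hypot(*d) * math.hypot(*grad_other))
    return max((d for d in candidates if bounded(d)), key=cos_to)

# Hypothetical tangent-line directions at x_h and agent j's gradient:
cands = [(0.0, 1.0), (0.0, -1.0), (1.0, 1.0), (-1.0, -1.0)]
tg = choose_TG(cands, grad_other=(1.0, 0.0), bounded=lambda d: True)
```

Maximising the cosine to $\nabla u_j(x_h)$ is equivalent to minimising the angle to it, so the selected direction stays (approximately) on agent i's indifference curve while moving as favourably as possible for agent j.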
Thus, the sought-after point $x_{h+1}$ can be generated by first finding a point $y_h$ along the direction of $T^G(x_h)$ and then moving from $y_h$ in the direction of $\nabla u_i(x_h)$ until we intersect with $IC_i(x_h)$. Mathematically, let $\zeta$ and $\xi$ denote the vectors $T^G(x_h)$ and $\nabla u_i(x_h)$, respectively; $x_{h+1}$ is the solution to the following optimisation problem: $$\max_{\lambda_1, \lambda_2 \in L} u_j(x_h + \lambda_1 \zeta + \lambda_2 \xi)$$ subject to $x_h + \lambda_1 \zeta + \lambda_2 \xi \in O^c$ and $u_i(x_h + \lambda_1 \zeta + \lambda_2 \xi) = u_i(x_h)$, where L is a suitable interval of positive real numbers, e.g., $L = \{\lambda \mid \lambda > 0\}$ or $L = \{\lambda \mid a < \lambda \leq b\}$, $0 \leq a < b$. Given an initial tentative agreement $x_0$, the method described above allows a sequence of tentative agreements $x_1, x_2, \ldots$ to be iteratively generated. The iteration stops whenever a weakly Pareto-optimal agreement is reached. THEOREM 3. If the sequence of agreements generated by the above method is bounded, then the method converges to a point $x^* \in O^c$ that is weakly Pareto-optimal. 5. CONCLUSION AND FUTURE WORK In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or create value. We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach that we describe aims both for efficiency, in the sense that it produces Pareto-optimal outcomes (i.e., no aspect can be improved for one of the parties without worsening the outcome for another party), and for fairness, choosing optimal solutions which distribute gains amongst the agents in some appropriate manner.
We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes which is within the Pareto-optimal set for those attributes. For simple attributes (i.e., those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as none of the simple attributes strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution. The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al. [4] present an approach to searching for joint gains in multi-party negotiations. The relation of their approach to ours is discussed in the preceding section. Lai et al. [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al. [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations agents' cooperativeness may not bring the most benefit to the organisation as a whole, though they give no explanation of why. Jonker et al. [7] consider an approach to multi-attribute negotiation without the use of a mediator. Thus, their approach can be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach.
The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework, we are also working on the distributive phase using the mediator. Acknowledgement The authors acknowledge financial support by ARC Discovery Grant (2006-2009, grant DP0663147) and DEST IAP grant (2004-2006, grant CG040014). The authors would like to thank Lawrence Cavedon and the RMIT Agents research group for their helpful comments and suggestions. 6. REFERENCES [1] F. Alemi, P. Fos, and W. Lacorte. A demonstration of methods for studying negotiations between physicians and health care managers. Decision Science, 21:633-641, 1990. [2] M. Ehrgott. Multicriteria Optimization. Springer-Verlag, Berlin, 2000. [3] H. Ehtamo, R. P. Hämäläinen, P. Heiskanen, J. Teich, M. Verkama, and S. Zionts. Generating Pareto solutions in a two-party setting: Constraint proposal methods. Management Science, 45(12):1697-1709, 1999. [4] H. Ehtamo, E. Kettunen, and R. P. Hämäläinen. Searching for joint gains in multi-party negotiations. European Journal of Operational Research, 130:54-69, 2001. [5] P. Faratin. Automated Service Negotiation Between Autonomous Computational Agents. PhD thesis, University of London, 2000. [6] A. Foroughi. Minimizing negotiation process losses with computerized negotiation support systems. The Journal of Applied Business Research, 14(4):15-26, 1998. [7] C. M. Jonker, V. Robu, and J. Treur. An agent architecture for multi-attribute negotiation using incomplete preference information. J. Autonomous Agents and Multi-Agent Systems, (to appear). [8] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. John Wiley and Sons, Inc., New York, 1976. [9] G.
Kersten and S. Noronha. Rational agents, contract curves, and non-efficient compromises. IEEE Systems, Man, and Cybernetics, 28(3):326-338, 1998. [10] M. Klein, P. Faratin, H. Sayama, and Y. Bar-Yam. Protocols for negotiating complex contracts. IEEE Intelligent Systems, 18(6):32-38, 2003. [11] S. Kraus, J. Wilkenfeld, and G. Zlotkin. Multiagent negotiation under time constraints. Artificial Intelligence Journal, 75(2):297-345, 1995. [12] G. Lai, C. Li, and K. Sycara. Efficient multi-attribute negotiation with incomplete information. Group Decision and Negotiation, 15:511-528, 2006. [13] D. Lax and J. Sebenius. The manager as negotiator: The negotiator's dilemma: Creating and claiming value, 2nd ed. In S. Goldberg, F. Sander & N. Rogers, editors, Dispute Resolution, 2nd ed., pages 49-62. Little Brown & Co., 1992. [14] M. Lomuscio and N. Jennings. A classification scheme for negotiation in electronic commerce. In Agent-Mediated Electronic Commerce: A European AgentLink Perspective. Springer-Verlag, 2001. [15] R. Maes and A. Moukas. Agents that buy and sell. Communications of the ACM, 42(3):81-91, 1999. [16] J. Nash. Two-person cooperative games. Econometrica, 21(1):128-140, April 1953. [17] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, Cambridge, USA, 1982. [18] H. Raiffa, J. Richardson, and D. Metcalfe. Negotiation Analysis: The Science and Art of Collaborative Decision Making. Belknap Press, Cambridge, MA, 2002. [19] T. Sandholm. Agents in electronic commerce: Component technologies for automated negotiation and coalition formation. JAAMAS, 3(1):73-96, 2000. [20] J. Sebenius. Negotiation analysis: A characterization and review. Management Science, 38(1):18-38, 1992. [21] L. Weingart, E. Hyder, and M. Prietula. Knowledge matters: The effect of tactical descriptions on negotiation behavior and outcome. Tech. Report, CMU, 1995. [22] X. Zhang, V. R. Lesser, and T. Wagner. Integrative negotiation among agents situated in organizations.
IEEE Trans. on Systems, Man, and Cybernetics, Part C, 36(1):19-30, 2006. Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory ABSTRACT It is well established by conflict theorists and others that successful negotiation should incorporate "creating value" as well as "claiming value." Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not in direct conflict between the parties, (ii) trade-offs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties, in a series of steps, towards the Pareto-optimal frontier. 1.
INTRODUCTION Given that negotiation is perhaps one of the oldest activities in the history of human communication, it is perhaps surprising that experiments on negotiation have shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] provide analyses of negotiators' failure to achieve efficient agreements in practice and their unwillingness to disclose private information for strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining, which they refer to as "creating value" and "claiming value." They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma. We argue that the Negotiator's Dilemma is essentially information-based, due to the private information held by the agents. Such private information contains both the information that implies the agent's bottom lines (or, her walk-away positions) and the information that enforces her bargaining strength. For instance, when bargaining to sell a house to a potential buyer, the seller would try to hide her actual reserve price as much as possible, for she hopes to reach an agreement at a much higher price than her reserve price. On the other hand, the outside options available to her (e.g. other buyers who have expressed genuine interest with fairly good offers) constitute the information that improves her bargaining strength, which she would like to convey to her opponent. But at the same time, her opponent is well aware of the fact that it is in her interest to boost her bargaining strength and thus will not accept any information she sends out unless it is substantiated by evidence.
Coming back to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other, making information manipulation part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process. That is, a negotiator may refuse to concede on an important conflicting issue by claiming that he has made a major concession (on another issue) to meet his opponent's interests, even though the concession he made could be insignificant to him. For instance, few buyers would start bargaining with a dealer over a notebook computer by declaring that they are most interested in an extended warranty for the item and are therefore prepared to pay a high price to get such an extended warranty. Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation, allowing computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). However, because of the Negotiator's Dilemma, and given even bargaining power and incomplete information, the following two undesirable situations often arise: (i) negotiators reach inefficient compromises, or (ii) negotiators engage in a deadlock situation in which both negotiators refuse to act on incomplete information and at the same time do not want to disclose more information. In this paper, we argue for the role of a mediator to resolve these two issues.
The mediator thus plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the information disclosed by the negotiators so as to prevent negotiators from using uncertainty and private information as a strategic device. To take advantage of existing results in the negotiation analysis and operations research (OR) literatures [18], we employ multi-criteria decision making (MCDM) theory to allow the negotiation problem to be represented and analysed. Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses future work with some concluding remarks. 2. BACKGROUND 2.1 Multi-criteria decision making theory Let A denote the set of feasible alternatives available to a decision maker M. Since an act, or decision, $a \in A$ may involve multiple aspects, we usually describe the alternatives a with a set of attributes j (j = 1, ..., m). (Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives $X_1, \ldots, X_k$. We assume that $X_i$ (i = 1, ..., k) maps the alternatives to real numbers. Thus, a tuple $(x_1, \ldots, x_k) = (X_1(a), \ldots, X_k(a))$ denotes the consequence of the act a to the decision maker M. By definition, objectives are statements that delineate the desires of a decision maker. Thus, M wishes to maximise his objectives. However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other in that improved achievement on one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like "minimise cost" and "maximise the quality of services." Since better services can often only be attained for a price, these objectives conflict.
Due to the conflicting nature of a decision maker's objectives, M usually has to settle on a compromise solution. That is, he may have to choose an act $a \in A$ that does not optimise every objective. This is the topic of multi-criteria decision making theory. Part of the solution to this problem is that M has to try to identify the Pareto frontier in the consequence space $\{(X_1(a), \ldots, X_k(a))\}_{a \in A}$. DEFINITION 1. (Dominance) Let $x = (x_1, \ldots, x_k)$ and $x' = (x'_1, \ldots, x'_k)$ be two consequences. x dominates x' iff $x_i \geq x'_i$ for all i, and the inequality is strict for at least one i. The Pareto frontier in a consequence space then consists of all consequences that are not dominated by any other consequence. This is illustrated in Fig. 1, in which an alternative consists of two attributes $d_1$ and $d_2$ and the decision maker tries to maximise the two objectives $X_1$ and $X_2$. A decision $a \in A$ whose consequence does not lie on the Pareto frontier is inefficient. [Figure 1: The Pareto frontier] While the Pareto frontier allows M to avoid taking inefficient decisions, M still has to decide which of the efficient consequences on the Pareto frontier is most preferred by him. MCDM theorists introduce a mechanism to allow the objective components of consequences to be normalised to payoff valuations for the objectives. Consequences can then be ordered: if the gains in satisfaction brought about by C1 (in comparison to C2) equal the losses in satisfaction brought about by C1 (in comparison to C2), then the two consequences C1 and C2 are considered indifferent. M can now construct the set of indifference curves in the consequence space (the dashed curves in Fig. 1). The most preferred indifference curve that intersects the Pareto frontier is in focus: its intersection with the Pareto frontier is the sought-after consequence (i.e., the optimal consequence in Fig. 1). 2.2 A negotiation framework A multi-agent negotiation framework consists of: 1.
A set of two negotiating agents $N = \{1, 2\}$. 2. A set of attributes $Att = \{\alpha_1, \ldots, \alpha_m\}$ characterising the issues the agents are negotiating over. Each attribute $\alpha$ can take a value from the set $Val_\alpha$. 3. A set of alternative outcomes O. An outcome $o \in O$ is represented by an assignment of values to the corresponding attributes in Att. 4. Agents' utility: Based on the theory of multiple-criteria decision making [8], we define the agents' utility as follows: • Objectives: Agent i has a set of $n_i$ objectives, or interests, denoted by j ($j = 1, \ldots, n_i$). To measure how much an outcome o fulfils an objective j for an agent i, we use objective functions: for each agent i, we define i's interests using the objective vector function $f_i = [f_{ij}] : O \to \mathbb{R}^{n_i}$. • Value functions: Instead of directly evaluating an outcome o, agent i looks at how much his objectives are fulfilled and will make a valuation based on these more basic criteria. Thus, for each agent i, there is a value function $\sigma_i : \mathbb{R}^{n_i} \to \mathbb{R}$. In particular, Raiffa [17] shows how to systematically construct an additive value function for each party involved in a negotiation. • Utility: Now, given an outcome $o \in O$, an agent i is able to determine its value, i.e., $\sigma_i(f_i(o))$. However, a negotiation infrastructure is usually required to facilitate negotiation. This might involve other mechanisms and factors/parties, e.g., a mediator, a legal institution, participation fees, etc. The standard way to implement such a thing is to allow money and side-payments. (In fact, given the k-dimensional space, indifference curves should be called indifference surfaces; however, we will not descend to that level of detail.) In this paper, we ignore those side-effects and assume that agent i's utility function $u_i$ is normalised so that $u_i : O \to [0, 1]$. EXAMPLE 1. There are two agents, A and B.
Agent A has a task T that needs to be done and also 100 units of a resource R. Agent B has the capacity to perform task T and would like to obtain at least 10 and at most 20 units of the resource R. Agent B is indifferent about any amount between 10 and 20 units of the resource R. The objective functions for both agents A and B are cost and revenue, and they both aim at minimising costs while maximising revenues. Having T done generates for A a revenue $r_{A,T}$, while doing T incurs a cost $c_{B,T}$ to B. Agent B obtains a revenue $r_{B,R}$ for each unit of the resource R, while providing each unit of the resource R costs agent A $c_{A,R}$. Assuming that money transfer between agents is possible, the set Att then contains three attributes: • T, taking values from the set {0, 1}, indicates whether the task T is assigned to agent B; • R, taking values from the set of non-negative integers, indicates the amount of resource R allocated to agent B; and • MT, taking values from $\mathbb{R}$, indicates the payment p to be transferred from A to B. 3. PROBLEM FORMALISATION Consider Example 1 and assume that $r_{A,T} = \$150$, $c_{B,T} = \$100$, $r_{B,R} = \$10$, and $c_{A,R} = \$7$. That is, the revenue generated for A exceeds the cost incurred by B to do task T, and B values resource R more highly than the cost for A to provide it. The optimal solution to this problem scenario is to assign task T to agent B and to allocate 20 units of resource R (i.e., the maximal amount of resource R required by agent B) from agent A to agent B. This outcome regarding the resource and task allocation problems leaves payoffs of $10 to agent A and $100 to agent B. Any other outcome would leave at least one of the agents worse off. In other words, the presented outcome is Pareto-efficient and should be part of the solution outcome for this problem scenario.
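The arithmetic behind Example 1 can be verified directly. Since the money-transfer attribute MT can redistribute payoffs arbitrarily, the Pareto-efficient task/resource allocation is the one maximising total surplus; the `payoffs` helper below is an illustration, not code from the paper:

```python
# Payoff arithmetic of Example 1 under the stated parameters.
r_A_T, c_B_T = 150, 100   # A's revenue for having T done / B's cost to do T
r_B_R, c_A_R = 10, 7      # B's per-unit revenue / A's per-unit cost for R

def payoffs(assign_T, units_R):
    """(A's payoff, B's payoff) for a given task assignment and
    resource allocation, before any money transfer."""
    payoff_A = (r_A_T if assign_T else 0) - c_A_R * units_R
    payoff_B = r_B_R * units_R - (c_B_T if assign_T else 0)
    return payoff_A, payoff_B

best = payoffs(True, 20)   # assign T to B, allocate 20 units: (10, 100)
# Alternatives yield strictly less total surplus, e.g.:
#   payoffs(True, 10)  -> (80, 0),     total  80
#   payoffs(False, 20) -> (-140, 200), total  60
```

With the proposed allocation the total surplus is 110, and a suitable payment p can then split that surplus between A and B; any other (T, R) choice shrinks the pie before it is divided.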
However, as the agents still have to bargain over the amount of the money transfer p, neither agent would be willing to disclose their respective costs and revenues regarding the task T and the resource R. As a consequence, agents often do not achieve the optimal outcome presented above in practice. To address this issue, we introduce a mediator to help the agents discover better agreements than the ones they might otherwise settle on. Note that this problem is essentially the problem of searching for joint gains in a multilateral negotiation in which the involved parties hold strategic information, i.e., the integrative part of a negotiation. In order to facilitate this process, we introduce the role of a neutral mediator. Before formalising the decision problems faced by the mediator and the negotiating agents, we discuss the properties of the solution outcomes to be achieved by the mediator. 2Certainly, without money transfer to compensate agent A, this outcome is not a fair one. In a negotiation setting, the two typical design goals would be: • Efficiency: Prevent the agents from settling on an outcome that is not Pareto-optimal; and • Fairness: Avoid agreements that give most of the gains to a subset of agents while leaving the rest with too little. The above goals are axiomatised in Nash's seminal work [16] on cooperative negotiation games. Essentially, Nash advocates the following properties to be satisfied by a solution to the bilateral negotiation problem: (i) it produces only Pareto-optimal outcomes; (ii) it is invariant to affine transformations (of the consequence space); (iii) it is symmetric; and (iv) it is independent of irrelevant alternatives. A solution satisfying Nash's axioms is called a Nash bargaining solution. It then turns out that, by taking the negotiators' utilities as its objectives, the mediator itself faces a multi-criteria decision making problem.
The issues faced by the mediator are: (i) the mediator requires access to the negotiators' utility functions, and (ii) making (fair) tradeoffs between different agents' utilities. Our methods allow the agents to repeatedly interact with the mediator so that a Nash solution outcome can be found by the parties. Informally, the problem faced by both the mediator and the negotiators is the construction of the indifference curves. Why are the indifference curves so important? • To the negotiators, knowing the options available along indifference curves opens up opportunities to reach more efficient outcomes. For instance, consider an agent A who is presenting his opponent with an offer θA which she refuses to accept. Rather than having to concede, A could look at his indifference curve going through θA and choose another proposal θ′A. To him, θA and θ′A are indifferent, but θ′A could give some gains to B and thus will be more acceptable to B. In other words, the outcome θ′A is more efficient than θA to these two negotiators. • To the mediator, constructing indifference curves requires a measure of fairness between the negotiators. The mediator needs to determine how much utility it needs to take away from the other negotiators to give a particular negotiator a specific gain G (in utility). In order to search for integrative solutions within the outcome space O, we characterise the relationship between the agents over the set of attributes Att. As the agents hold different objectives and have different capacities, it may be the case that changing between two values of a specific attribute implies different shifts in utility for the agents. However, the problem of finding the exact Pareto-optimal set3 is NP-hard [2]. Our approach is thus to solve this optimisation problem in two steps. In the first step, the more manageable attributes will be solved. These are attributes that take a finite set of values.
The result of this step would be a subset of outcomes that contains the Pareto-optimal set. In the second step, we employ an iterative procedure that allows the mediator to interact with the negotiators to find joint improvements that move towards a Pareto-optimal outcome. This approach will not work unless the attributes from Att are independent. 3The Pareto-optimal set is the set of outcomes whose consequences (in the consequence space) correspond to the Pareto frontier. Most works on multi-attribute, or multi-issue, negotiation (e.g. [17]) assume that the attributes or the issues are independent, resulting in an additive value function for each agent.4 ASSUMPTION 1. Let S ⊆ Att and let S̄ denote the set Att \ S. Assume that vS and v′S are two assignments of values to the attributes of S, and v1S̄, v2S̄ are two arbitrary value assignments to the attributes of S̄; then (ui([vS, v1S̄]) − ui([v′S, v1S̄])) = (ui([vS, v2S̄]) − ui([v′S, v2S̄])). That is, the utility of agent i over the attributes from S is defined independently of any value assignment to the other attributes. 4. MEDIATOR-BASED BILATERAL NEGOTIATIONS As discussed by Lax and Sebenius [13], under incomplete information the tension between creating and claiming value is the primary cause of inefficient outcomes. This can be seen most easily in negotiations involving two negotiators; during the distributive phase of the negotiation, the two negotiators' objectives directly oppose each other. We will now formally characterise this relationship between negotiators by defining the opposition between two negotiating parties. The following exposition is mainly reproduced from [9]. Assume for the moment that all attributes from Att take values from the set of real numbers R, i.e., Valj ⊆ R for all j ∈ Att. We further assume that the set O = ×j∈Att Valj of feasible outcomes is defined by constraints that all parties must obey and that O is convex. Now, an outcome o ∈ O is just a point in the m-dimensional space of real numbers.
Then, the questions are: (i) from the point of view of an agent i, is o already the best outcome for i? (ii) if o is not the best outcome for i, then is there another outcome o′ such that o′ gives i a better utility than o, and o′ does not cause a utility loss to the other agent j in comparison to o? The above questions can be answered by looking at the directions of improvement of the negotiating parties at o, i.e., the directions in the outcome space O in which their utilities increase at the point o. Under the assumption that the parties' utility functions ui are differentiable and concave, the set of all directions of improvement for a party at a point o can be defined in terms of his most preferred, or gradient, direction at that point. When the gradient direction ∇ui(o) of agent i at the point o directly opposes the gradient direction ∇uj(o) of agent j at o, the two parties strongly disagree at o, and no joint improvements can be achieved for i and j in the locality surrounding o. Since opposition between the two parties can vary considerably over the outcome space (with one pair of outcomes considered highly antagonistic and another pair highly cooperative), we need to describe the local properties of the relationship. We begin with the opposition at any point of the outcome space Rm. The following definition is reproduced from [9]: 3. The parties are in local weak opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) ≥ 0, i.e., iff the gradients at x of the two utility functions form an acute or right angle. 4. The parties are in local strong opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) < 0, i.e., iff the gradients at x form an obtuse angle. 5. The parties are in global strict (nonstrict, weak, strong) opposition iff for every x ∈ Rm they are in local strict (nonstrict, weak, strong) opposition. Global strict and nonstrict oppositions are complementary cases.
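The dot-product test in this definition can be illustrated numerically. The sketch below is ours (the helper names `gradient` and `local_opposition` are not from the paper): it estimates the gradients of two concave utilities by central differences and classifies the local opposition at a point:

```python
import numpy as np

def gradient(u, x, eps=1e-6):
    """Central-difference estimate of the gradient of u at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for j in range(len(x)):
        e = np.zeros_like(x); e[j] = eps
        g[j] = (u(x + e) - u(x - e)) / (2 * eps)
    return g

def local_opposition(u1, u2, x):
    """Classify local opposition at x by the sign of the inner product
    of the two gradients (acute/right angle vs. obtuse angle)."""
    d = gradient(u1, x) @ gradient(u2, x)
    return "weak" if d >= 0 else "strong"

# Two concave utilities over a 2-dimensional outcome space,
# peaking at (1, 1) and (-1, -1) respectively:
u1 = lambda x: -(x[0] - 1) ** 2 - (x[1] - 1) ** 2
u2 = lambda x: -(x[0] + 1) ** 2 - (x[1] + 1) ** 2

print(local_opposition(u1, u2, (0.0, 0.0)))   # strong: gradients point apart
print(local_opposition(u1, u2, (2.0, -2.0)))  # weak: both gain moving inward
```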
Essentially, under global strict opposition the whole outcome space O becomes the Pareto-optimal set, as at no point in O can the negotiating parties make a joint improvement, i.e., every point in O is a Pareto-efficient outcome. In other words, under global strict opposition the outcome space O can be flattened out into a single line such that for each pair of outcomes x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y), i.e., at every point in O, the gradients of the two utility functions point to the two different ends of the line. Intuitively, global strict opposition implies that there is no way to obtain joint improvements for both agents. As a consequence, the negotiation degenerates into a distributive negotiation, i.e., the negotiating parties should try to claim as large a share of the negotiation issues as possible, while the mediator should aim for fairness of the division. On the other hand, global nonstrict opposition allows room for joint improvements, and all parties might be better off trying to realise the potential gains by reaching Pareto-efficient agreements. Weak and strong opposition indicate different levels of opposition. The weaker the opposition, the more potential gains can be realised, making cooperation the better strategy to employ during negotiation. Conversely, stronger opposition suggests that the negotiating parties tend to behave strategically, leading to misrepresentation of their respective objectives and utility functions and making joint gains more difficult to realise. We have temporarily been making the assumption that the outcome space O is a subset of Rm. In many real-world negotiations, this assumption would be too restrictive. We will continue our exposition by lifting this restriction and allowing discrete attributes. However, as most negotiations involve only discrete issues with a bounded number of options, we will assume that each attribute takes values either from a finite set or from the set of real numbers R.
In the rest of the paper, we will refer to attributes whose values are from finite sets as simple attributes and attributes whose values are from R as continuous attributes. The notions of local opposition, i.e., strict, nonstrict, weak and strong, are not applicable to outcome spaces that contain simple attributes, nor are the notions of global weak and strong opposition. However, the notions of global strict and nonstrict opposition can be generalised to outcome spaces that contain simple attributes. DEFINITION 3. Given an outcome space O, the parties are in global strict opposition iff ∀x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y). The parties are in global nonstrict opposition if they are not in global strict opposition. 4.1 Optimisation on simple attributes In order to extract the optimal values for a subset of attributes, in the first step of this optimisation process the mediator requests the negotiators to submit their respective utility functions over the set of simple attributes. Let Simp ⊆ Att denote the set of all simple attributes from Att, and let S̄imp denote the set Att \ Simp. Note that, due to Assumption 1, agent i's utility function can be characterised as follows: ui([VSimp, VS̄imp]) = wi1 · ui,1([VSimp]) + wi2 · ui,2([VS̄imp]), where ui,1 and ui,2 are the utility components of ui over the sets of attributes Simp and S̄imp, respectively, and 0 ≤ wi1, wi2 ≤ 1 with wi1 + wi2 = 1. As attributes are independent of each other regarding the agents' utility functions, the optimisation problem over the attributes from Simp can be carried out by fixing ui,2([VS̄imp]) to a constant and then searching for the optimal values within the set of attributes Simp.
Now, how does the mediator determine the optimal values for the attributes in Simp? Several well-known optimisation strategies are applicable here: • The utilitarian solution: The sum of the agents' utilities is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg maxv∈ValSimp Σi ui,1([v]). • The Nash solution: The product of the agents' utilities is maximised: arg maxv∈ValSimp Πi ui,1([v]). • The egalitarian solution (aka the maximin solution): The utility of the agent with minimum utility is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg maxv∈ValSimp mini ui,1([v]). The question now is of course whether a negotiator has an incentive to misrepresent his utility function. First of all, recall that the agents' utility functions are bounded, i.e., ∀o ∈ O, 0 ≤ ui(o) ≤ 1. Thus, the agents have no incentive to overstate their utility regarding an outcome o: if o is the most preferred outcome for an agent i, then he already assigns the maximal utility to o. On the other hand, if o is not the most preferred outcome for i, then by overstating the utility he assigns to o, the agent i runs the risk of having to settle on an agreement which would give him less payoff than he is supposed to receive. However, agents do have an incentive to understate their utility if the final settlement is based on the above solutions alone. Essentially, the mechanism for preventing an agent from understating his utility regarding particular outcomes is to guarantee a certain measure of fairness in the final settlement. That is, the agents lose the incentive to be dishonest: any gains from exploiting the known solutions used to determine the settlement outcome would be offset by the fairness maintenance mechanism. First, we state an easy lemma. LEMMA 1. When Simp contains one single attribute, the agents have an incentive to understate their utility functions regarding outcomes that are not attractive to them.
By way of illustration, consider the set Simp containing only one attribute that can take values from the finite set {A, B, C, D}. Assume that negotiator 1 assigns utilities of 0.4, 0.7, 0.9, and 1 to A, B, C, and D, respectively. Assume also that negotiator 2 assigns utilities of 1, 0.9, 0.7, and 0.4 to A, B, C, and D, respectively. If agent 1 misrepresents his utility function to the mediator by reporting utility 0 for the values A, B and C and utility 1 for the value D, then agent 2, who reports honestly to the mediator, will obtain the worst outcome D under any of the above solutions. Note that agent 1 does not need to know agent 2's utility function, nor does he need to know the strategy employed by agent 2. As long as he knows that the mediator is going to employ one of the above three solutions, the above misrepresentation is the dominant strategy for this game. However, when the set Simp contains more than one attribute and none of the attributes strongly dominates the other attributes, then the above problem diminishes by itself thanks to the integrative solution. We of course have to define clearly what it means for an attribute to strongly dominate other attributes. Intuitively, if most of an agent's utility concentrates on one of the attributes, then this attribute strongly dominates the other attributes. We again appeal to Assumption 1 on additivity of utility functions to achieve a measure of fairness within this negotiation setting. Due to Assumption 1, we can characterise agent i's utility component over the set of attributes Simp by the following equation: ui,1([VSimp]) = Σj∈Simp wij · uij(vj). Then, an attribute ℓ ∈ Simp strongly dominates the rest of the attributes in Simp (for agent i) iff wiℓ > Σj∈(Simp\{ℓ}) wij. Attribute ℓ is then said to be strongly dominant (for agent i) wrt the set of simple attributes Simp.
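The manipulation in this illustration can be verified directly. The sketch below (our code, not the paper's) applies the three solutions to the reported utilities of the {A, B, C, D} example and shows that agent 1's understatement forces the settlement to D under all three:

```python
values = ["A", "B", "C", "D"]
honest1 = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0}  # agent 1, truthful
honest2 = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4}  # agent 2, truthful
lied1   = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 1.0}  # agent 1 understates

def settle(rule, u1, u2):
    """Pick the settlement value under one of the three solutions,
    given the utilities reported to the mediator."""
    score = {"utilitarian": lambda v: u1[v] + u2[v],
             "nash":        lambda v: u1[v] * u2[v],
             "egalitarian": lambda v: min(u1[v], u2[v])}[rule]
    return max(values, key=score)

for rule in ("utilitarian", "nash", "egalitarian"):
    # honest reports settle on B or C; the misreport drags the outcome to D
    print(rule, settle(rule, honest1, honest2), settle(rule, lied1, honest2))
```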
The following theorem shows that if the set of attributes Simp does not contain a strongly dominant attribute, then the negotiators have no incentive to be dishonest. THEOREM 1. Given a negotiation framework, if for every agent the set of simple attributes does not contain a strongly dominant attribute, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes. So far, we have been concentrating on the efficiency issue while leaving the fairness issue aside. A fair framework not only supports a more satisfactory distribution of utility among the agents, but is also often a good measure to prevent misrepresentation of private information by the agents. Of the three solutions presented above, the utilitarian solution does not support fairness. On the other hand, Nash [16] proves that the Nash solution satisfies the above four axioms for cooperative bargaining games and is considered a fair solution. The egalitarian solution is another mechanism to achieve fairness, essentially by helping the worst off. The problem with these solutions, as discussed earlier, is that they are vulnerable to strategic behaviour when one of the attributes strongly dominates the rest of the attributes. However, there is yet another solution that aims to guarantee fairness: the minimax solution. That is, the utility of the agent with maximum utility is minimised. It is obvious that the minimax solution produces inefficient outcomes. However, to get around this problem (given that the Pareto-optimal set can be tractably computed), we can apply this solution over the Pareto-optimal set only. Let POSet ⊆ ValSimp be the Pareto-optimal subset of the simple outcomes; the minimax solution is defined to be the solution of the following optimisation problem:
arg minv∈POSet maxi ui,1([v]). While overall efficiency often suffers under a minimax solution, i.e., the sum of all agents' utilities is often lower than under other solutions, it can be shown that the minimax solution is less vulnerable to manipulation. THEOREM 2. Given a negotiation framework, under the minimax solution, if the negotiators are uncertain about their opponents' preferences, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes. That is, even when there is only one single simple attribute, if an agent is uncertain whether the other agent's most preferred resolution is also his own most preferred resolution, then he should opt for truth-telling as the optimal strategy. 4.2 Optimisation on continuous attributes When the attributes take values from infinite sets, we assume that they are continuous. This is similar to the common practice in operations research in which linear programming solutions/techniques are applied to integer programming problems. We denote the number of continuous attributes by k, i.e., Att = Simp ∪ S̄imp and |S̄imp| = k. Then, the outcome space O can be represented as follows: O = (∏j∈Simp Valj) × (∏l∈S̄imp Vall), where ∏l∈S̄imp Vall ⊆ Rk is the continuous component of O. Let Oc denote the set ∏l∈S̄imp Vall. We will refer to Oc as the feasible set and assume that Oc is closed and convex. After carrying out the optimisation over the set of simple attributes, we are able to assign the optimal values to the simple attributes from Simp. Thus, we reduce the original problem to the problem of searching for optimal (and fair) outcomes within the feasible set Oc. Recall that, by Assumption 1, we can characterise agent i's utility function as follows: ui(o) = C + wi2 · ui,2([VS̄imp]), where C is the constant wi1 · ui,1([v*Simp]) and v*Simp denotes the optimal values of the simple attributes in Simp.
Hence, without loss of generality (albeit with a blatant abuse of notation), we can take agent i's utility function to be ui: Rk → [0, 1]. Accordingly, we will also take the set of outcomes under consideration by the agents to be the feasible set Oc. We now state another assumption to be used in this section: ASSUMPTION 2. The negotiators' utility functions can be described by continuously differentiable and concave functions ui: Rk → [0, 1] (i = 1, 2). It should be emphasised that we do not assume that agents explicitly know their utility functions. For the method described in the following to work, we only assume that the agents know the relevant information, e.g., at a certain point within the feasible set Oc, the gradient direction of their own utility function and some section of their respective indifference curves. Assume that a tentative agreement (which is a point x ∈ Rk) is currently on the table; the process for the agents to jointly improve this agreement in order to reach a Pareto-optimal agreement can be described as follows. The mediator asks the negotiators to discreetly submit their respective gradient directions at x, i.e., ∇u1(x) and ∇u2(x). Note that the goal of the process described here is to search for agreements that are more efficient than the tentative agreement currently on the table. That is, we are searching for points x′ within the feasible set Oc such that moving from the current tentative agreement x to x′ brings more gains to at least one of the agents while not hurting any of the agents. Due to the assumption made above, i.e., that the feasible set Oc is bounded, the conditions for an alternative x ∈ Oc to be efficient vary depending on the position of x. The following results are proved in [9]: Let B(x) = 0 denote the equation of the boundary of Oc, defining x ∈ Oc iff B(x) ≥ 0.
An alternative x* ∈ Oc is efficient iff either x* is an interior point of Oc and the gradients at x* are directly opposed, i.e., ∇u1(x*) = −λ∇u2(x*) for some λ > 0 (equation 2), or x* is a boundary point of Oc and λ1∇u1(x*) + λ2∇u2(x*) + μ∇B(x*) = 0 for some λ1, λ2 ≥ 0 (not both zero) and μ ≥ 0 (equation 3). We are now interested in answering the following questions: (i) What is the initial tentative agreement x0? (ii) How do we find a more efficient agreement xh+1, given the current tentative agreement xh? 4.2.1 Determining a fair initial tentative agreement It should be emphasised that the choice of the initial tentative agreement affects the fairness of the final agreement to be reached by the presented method. For instance, if the initial tentative agreement x0 is chosen to be the most preferred alternative of one of the agents, then it is also a Pareto-optimal outcome, making it impossible to find any joint improvement from x0. x0 will then be chosen as the final settlement, and if x0 turns out to be the worst alternative for the other agent, then this outcome is a very unfair one. Thus, it is important that the choice of the initial tentative agreement be made sensibly. Ehtamo et al. [3] present several methods to choose the initial tentative agreement (called the reference point in their paper). However, their goal is to approximate the Pareto-optimal set by systematically choosing a set of reference points. Once an (approximate) Pareto-optimal set is generated, it is left to the negotiators to decide which of the generated Pareto-optimal outcomes is to be chosen as the final settlement. That is, distributive negotiation will then be required to settle the issue. We, on the other hand, are interested in a fair initial tentative agreement which is not necessarily efficient. Improving a given tentative agreement to yield a Pareto-optimal agreement is considered in the next section. For each attribute j ∈ S̄imp, an agent i will be asked to discreetly submit three values (from the set Valj): the most preferred value, denoted by pvi,j, the least preferred value, denoted by wvi,j, and a value that gives i an approximately average payoff, denoted by avi,j. (Note that this is possible because the set Valj is bounded.)
If pv1,j and pv2,j are sufficiently close, i.e., |pv1,j − pv2,j| < Δ for some pre-defined Δ > 0, then pv1,j and pv2,j are chosen to be the two "core" values, denoted by cv1 and cv2. Otherwise, between the two values pv1,j and av1,j, we eliminate the one that is closer to wv2,j; the remaining value is denoted by cv1. Similarly, we obtain cv2 from the two values pv2,j and av2,j. If cv1 = cv2, then cv1 is selected as the initial value for the attribute j as part of the initial tentative agreement. Otherwise, without loss of generality, we assume that cv1 < cv2. The mediator randomly selects p values mv1, ..., mvp from the open interval (cv1, cv2), where p ≥ 1. The mediator then asks the agents to submit their valuations over the set of values {cv1, cv2, mv1, ..., mvp}. The value for which the two agents' valuations are closest is selected as the initial value for the attribute j as part of the initial tentative agreement. The above procedure guarantees that the agents do not gain by behaving strategically. By performing the above procedure on every attribute j ∈ S̄imp, we are able to identify an initial tentative agreement x0 such that x0 ∈ Oc. The next step is to compute a new tentative agreement from an existing tentative agreement so that the new one is more efficient than the existing one. 4.2.2 Computing a new tentative agreement Our procedure is a combination of the method of jointly improving direction introduced by Ehtamo et al. [4] and a method we propose in the coming section. Basically, the idea is to see how strong an opposition the parties are in. If the two parties are in (local) weak opposition at the current tentative agreement xh, i.e., their improving directions at xh are close to each other, then the compromise direction proposed by Ehtamo et al. [4] is likely to point to a better agreement for both agents.
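The core-value selection of Section 4.2.1 can be sketched for a single continuous attribute as follows. This is our own reading of the procedure: the threshold Δ, the function names, and the midpoint used as a stand-in for the mediator's sampling step are all our assumptions:

```python
DELTA = 0.5  # the pre-defined closeness threshold Δ (an assumed value)

def core_value(pv_own, av_own, wv_other):
    """Between an agent's pv and av, eliminate the one closer to the
    opponent's worst value; return the remaining (core) value."""
    return av_own if abs(pv_own - wv_other) < abs(av_own - wv_other) else pv_own

def initial_value(pv1, av1, wv1, pv2, av2, wv2):
    """Pick the initial value of one attribute for the tentative agreement."""
    if abs(pv1 - pv2) < DELTA:          # preferred values essentially agree:
        cv1, cv2 = pv1, pv2             # both become the "core" values
    else:
        cv1 = core_value(pv1, av1, wv2)
        cv2 = core_value(pv2, av2, wv1)
    if cv1 == cv2:
        return cv1
    # The mediator would now sample p values in the open interval between
    # cv1 and cv2 and keep the candidate whose two reported valuations are
    # closest; we return the midpoint as a placeholder for that step.
    return (cv1 + cv2) / 2

print(initial_value(10, 6, 0, 2, 5, 12))  # cv1 = 6, cv2 = 5 → 5.5
```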
However, if the two parties are in local strong opposition at the current point xh, then it is unclear whether the compromise direction would bring benefit to one of the agents without hurting the other. We will first review the method proposed by Ehtamo et al. [4] to compute the compromise direction for a group of negotiators at a given point x ∈ Oc. Ehtamo et al. define a function T(x) that describes the mediator's choice of a compromise direction at x. For the case of two-party negotiations, the following bisecting function, denoted by TBS, can be defined over the interior set of Oc. Note that the closed set Oc contains two disjoint subsets: Oc = Oc0 ∪ OcB, where Oc0 denotes the set of interior points of Oc and OcB denotes the boundary of Oc. The bisecting compromise is defined by a function TBS: Oc0 → Rk, TBS(x) = ∇u1(x)/‖∇u1(x)‖ + ∇u2(x)/‖∇u2(x)‖. Given the current tentative agreement xh (h ≥ 0), the mediator has to choose a point xh+1 along d = T(xh) so that all parties gain. Ehtamo et al. then define a mechanism to generate a sequence of points and prove that when the generated sequence is bounded and all generated points (from the sequence) belong to the interior set Oc0, the sequence converges to a weakly Pareto-optimal agreement [4, pp. 59-60].5 As the above mechanism does not work at the boundary points of Oc, we will introduce a procedure that works everywhere in the alternative space Oc. Let x ∈ Oc and let θ(x) denote the angle between the gradients ∇u1(x) and ∇u2(x) at x. That is, θ(x) = arccos(∇u1(x) · ∇u2(x) / (‖∇u1(x)‖ ‖∇u2(x)‖)). From Definition 2, it is obvious that the two parties are in local strict opposition (at x) iff θ(x) = π, they are in local strong opposition iff π ≥ θ(x) > π/2, and they are in local weak opposition iff π/2 ≥ θ(x) ≥ 0. Note also that the two vectors ∇u1(x) and ∇u2(x) define a hyperplane, denoted by h∇(x), in the k-dimensional space Rk. Furthermore, there are two indifference curves of agents 1 and 2 going through the point x, denoted by IC1(x) and IC2(x), respectively.
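The bisecting compromise direction is simply the sum of the two unit gradients, which bisects the angle between them. A minimal sketch (the function name `t_bs` is ours):

```python
import numpy as np

def t_bs(grad1, grad2):
    """Bisecting compromise direction: sum of the two unit gradients."""
    g1 = np.asarray(grad1, dtype=float)
    g2 = np.asarray(grad2, dtype=float)
    return g1 / np.linalg.norm(g1) + g2 / np.linalg.norm(g2)

d = t_bs([2.0, 0.0], [0.0, 3.0])
print(d)  # [1. 1.] — bisects the right angle between the two gradients
```

Because each gradient is normalised before the sum, the resulting direction is unbiased by how steeply each agent's utility rises; only the two improving directions matter.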
Let hT1(x) and hT2(x) denote the tangent hyperplanes to the indifference curves IC1(x) and IC2(x), respectively, at the point x. The planes hT1(x) and hT2(x) intersect h∇(x) in the lines IS1(x) and IS2(x), respectively. Note that given a line L(x) going through the point x, there are two (unit) vectors from x along L(x) pointing in two opposite directions, denoted by L+(x) and L−(x). We can now informally explain our solution to the problem of searching for joint gains. When it is not possible to obtain a compromise direction for joint improvements at a point x ∈ Oc, either because the compromise vector points to the space outside of the feasible set Oc or because the two parties are in local strong opposition at x, we will consider moving along the indifference curve of one party while trying to improve the utility of the other party. As the mediator does not know the indifference curves of the parties, he has to use the tangent hyperplanes to the indifference curves of the parties at the point x. Note that the tangent hyperplane to a curve is a useful approximation of the curve in the immediate vicinity of the point of tangency, x. 5Let S be the set of alternatives; x* is weakly Pareto optimal if there is no x ∈ S such that ui(x) > ui(x*) for all agents i. We now describe an iteration step to reach the next tentative agreement xh+1 from the current tentative agreement xh ∈ Oc. A vector v whose tail is xh is said to be bounded in Oc if ∃λ > 0 such that xh + λv ∈ Oc. To start, the mediator asks the negotiators for their gradients ∇u1(xh) and ∇u2(xh), respectively, at xh. 1. If xh is a Pareto-optimal outcome according to equation 2 or equation 3, then the process is terminated. 2. If ∇u1(xh) · ∇u2(xh) > 0 and the vector TBS(xh) is bounded in Oc, then the mediator chooses the compromise improving direction d = TBS(xh) and applies the method described by Ehtamo et al. [4] to generate the next tentative agreement xh+1. 3.
Otherwise, among the four vectors ISσi(xh), i = 1, 2 and σ ∈ {+, −}, the mediator chooses the vector that (i) is bounded in Oc, and (ii) is closest to the gradient of the other agent, ∇uj(xh) (j ≠ i). Denote this vector by TG(xh). That is, we will be searching for a point on the indifference curve of agent i, ICi(xh), while trying to improve the utility of agent j. Note that when xh is an interior point of Oc, the situation is symmetric for the two agents 1 and 2, and the mediator has the choice of either finding a point on IC1(xh) to improve the utility of agent 2, or finding a point on IC2(xh) to improve the utility of agent 1. To decide which choice to make, the mediator has to track the distribution of gains throughout the whole process, to avoid giving more gains to one agent than to the other. Now, the point xh+1 to be generated lies somewhere on the intersection of ICi(xh) and the hyperplane defined by ∇ui(xh) and TG(xh). This intersection is approximated by TG(xh). Thus, the sought-after point xh+1 can be generated by first finding a point yh along the direction of TG(xh) and then moving from yh in the direction of ∇ui(xh) until we intersect ICi(xh). Mathematically, let ζ and ξ denote the vectors TG(xh) and ∇ui(xh), respectively; then xh+1 = xh + λ1ζ + λ2ξ, where (λ1, λ2) is the solution of the following optimisation problem: maximise uj(xh + λ1ζ + λ2ξ) subject to ui(xh + λ1ζ + λ2ξ) = ui(xh) and xh + λ1ζ + λ2ξ ∈ Oc. Given an initial tentative agreement x0, the method described above allows a sequence of tentative agreements x1, x2, ... to be iteratively generated. The iteration stops whenever a weakly Pareto-optimal agreement is reached. THEOREM 3. If the sequence of agreements generated by the above method is bounded, then the method converges to a point x* ∈ Oc that is weakly Pareto optimal.
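The iteration can be sketched for the easy case: interior points in local weak opposition, where stepping along the bisecting direction with a shrinking step improves both agents. This is our own simplification, not the paper's full procedure; the indifference-curve fallback of step 3 is omitted (the loop simply stops once the gradients form an obtuse angle), and the step-size rule is an assumption:

```python
import numpy as np

def grad(u, x, eps=1e-6):
    """Central-difference gradient estimate."""
    g = np.zeros_like(x)
    for j in range(len(x)):
        e = np.zeros_like(x); e[j] = eps
        g[j] = (u(x + e) - u(x - e)) / (2 * eps)
    return g

def mediate(u1, u2, x0, step=0.25, iters=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g1, g2 = grad(u1, x), grad(u2, x)
        if g1 @ g2 <= 0:          # local strong/strict opposition: stop here
            return x              # (the full method would switch to step 3)
        d = g1 / np.linalg.norm(g1) + g2 / np.linalg.norm(g2)
        s = step
        while s > 1e-9 and (u1(x + s * d) < u1(x) or u2(x + s * d) < u2(x)):
            s /= 2                # shrink the step until neither agent loses
        x = x + s * d
    return x

# Ideal points (1, 0) and (-1, 0); the Pareto set is the segment between them.
u1 = lambda x: -(x[0] - 1) ** 2 - x[1] ** 2
u2 = lambda x: -(x[0] + 1) ** 2 - x[1] ** 2
x_star = mediate(u1, u2, [0.0, 2.0])
print(x_star)  # moves down from (0, 2) until the gradients become obtuse
```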
The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or "create value." We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach that we describe aims both for efficiency, in the sense that it produces Pareto-optimal outcomes (i.e., no aspect can be improved for one of the parties without worsening the outcome for another party), and for fairness, choosing optimal solutions which distribute gains amongst the agents in some appropriate manner. We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes which is within the Pareto-optimal set for those attributes. For simple attributes (i.e., those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as none of the simple attributes strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution. The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al. [4] present an approach to searching for joint gains in multi-party negotiations. The relation of their approach to ours is discussed in the preceding section.
Lai et al [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations agents' cooperativeness may not bring the most benefit to the organisation as a whole, though no explanation is offered. Jonker et al [7] consider an approach to multi-attribute negotiation without the use of a mediator. Thus, their approach can be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach. The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework we are also working on the distributive phase using the mediator.
Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory ABSTRACT It is well established by conflict theorists and others that successful negotiation should incorporate "creating value" as well as "claiming value." Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not in direct conflict between the parties, (ii) trading off attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties, in a series of steps, towards the Pareto-optimal frontier. 1. INTRODUCTION Given that negotiation is perhaps one of the oldest activities in the history of human communication, it is perhaps surprising that experiments on negotiation have shown that negotiators more often than not reach inefficient compromises [1, 21].
Raiffa [17] and Sebenius [20] provide analyses of negotiators' failure to achieve efficient agreements in practice and their unwillingness to disclose private information for strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining, which they refer to as "creating value" and "claiming value." They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma. We argue that the Negotiator's Dilemma is essentially information-based, due to the private information held by the agents. Such private information contains both the information that implies the agent's bottom lines (or, her walk-away positions) and the information that enforces her bargaining strength. Returning to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other, making information manipulation part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process. Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation, allowing computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). In this paper, we argue for the role of a mediator to resolve the above two issues.
The mediator thus plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the information disclosed by the negotiators, preventing negotiators from using uncertainty and private information as a strategic device. To take advantage of existing results in the negotiation analysis and operations research (OR) literatures [18], we employ multi-criteria decision making (MCDM) theory to allow the negotiation problem to be represented and analysed. Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses future work with some concluding remarks. 2. BACKGROUND 2.1 Multi-criteria decision making theory Let A denote the set of feasible alternatives available to a decision maker M. As an act, or decision, a ∈ A may involve multiple aspects, we usually describe the alternatives a with a set of attributes j (j = 1, ..., m). (Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives X1, ..., Xk. Thus, M wishes to maximise his objectives. However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other, in that improved achievement on one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like "minimise cost" and "maximise the quality of services." Since better services can often only be attained for a price, these objectives conflict. Due to the conflicting nature of a decision maker's objectives, M usually has to settle for a compromise solution. That is, he may have to choose an act a ∈ A that does not optimise every objective. This is the topic of multi-criteria decision making theory.
J-38
Multi-Attribute Coalitional Games
We study coalitional games where the value of cooperation among the agents is solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value. This framework allows us to model diverse economic interactions by picking the right attributes. We study the computational complexity of two coalitional solution concepts for these games -- the Shapley value and the core. We show how the positive results obtained in this paper imply comparable results for other games studied in the literature.
[ "multi-attribut coalit game", "coalit game", "cooper", "agent", "divers econom interact", "comput complex", "core", "shaplei valu", "graph", "multi-issu represent", "linear combin", "unrestrict aggreg of subgam", "polynomi function min-cost flow problem", "min-cost flow problem", "superaddit game", "coalit game theori", "multi-attribut model", "compact represent" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U", "U", "U", "M", "U", "U", "M", "M", "R", "U" ]
Multi-Attribute Coalitional Games∗ Samuel Ieong † Computer Science Department Stanford University Stanford, CA 94305 sieong@cs.stanford.edu Yoav Shoham Computer Science Department Stanford University Stanford, CA 94305 shoham@cs.stanford.edu ABSTRACT We study coalitional games where the value of cooperation among the agents is solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value. This framework allows us to model diverse economic interactions by picking the right attributes. We study the computational complexity of two coalitional solution concepts for these games -- the Shapley value and the core. We show how the positive results obtained in this paper imply comparable results for other games studied in the literature. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; F.2 [Analysis of Algorithms and Problem Complexity] General Terms Algorithms, Economics 1. INTRODUCTION When agents interact with one another, the value of their contribution is determined by what they can do with their skills and resources, rather than simply their identities. Consider the problem of forming a soccer team. For a team to be successful, a team needs some forwards, midfielders, defenders, and a goalkeeper. The relevant attributes of the players are their skills at playing each of the four positions. The value of a team depends on how well its players can play these positions. At a finer level, we can extend the model to consider a wider range of skills, such as passing, shooting, and tackling, but the value of a team remains solely a function of the attributes of its players. Consider an example from the business world. Companies in the metals industry are usually vertically integrated and diversified.
They have mines for various types of ores, and also mills capable of processing and producing different kinds of metal. They optimize their production profile according to the market prices for their products. For example, when the price of aluminum goes up, they will allocate more resources to producing aluminum. However, each company is limited by the amount of ores it has, and its capacities in processing given kinds of ores. Two or more companies may benefit from trading ores and processing capacities with one another. To model the metal industry, the relevant attributes are the amounts of ores and the processing capacities of the companies. Given the exogenous input of market prices, the value of a group of companies will be determined by these attributes. Many real-world problems can likewise be modeled by picking the right attributes. As attributes apply to both individual agents and groups of agents, we propose the use of coalitional game theory to understand what groups may form and what payoffs the agents may expect in such models. Coalitional game theory focuses on what groups of agents can achieve, and thus connects strongly with e-commerce, as Internet economies have significantly enhanced the ability of businesses to identify and capitalize on profitable opportunities for cooperation. Our goal is to understand the computational aspects of computing the solution concepts (stable and/or fair distributions of payoffs, formally defined in Section 3) for coalitional games described using attributes. Our contributions can be summarized as follows: • We define a formal representation for coalitional games based on attributes, and relate this representation to others proposed in the literature. We show that, compared to other representations, there exist games for which a multi-attribute description can be exponentially more succinct, and for no game is it worse. • Given the generality of the model, positive results carry over to other representations.
We discuss two positive results in the paper, one for the Shapley value and one for the core, and show how these imply related results in the literature. • We study an approximation heuristic for the Shapley value when its exact values cannot be found efficiently. We provide an explicit bound on the maximum error of the estimate, and show that the bound is asymptotically tight. We also carry out experiments to evaluate how the heuristic performs on random instances.1 2. RELATED WORK Coalitional game theory has been well studied in economics [9, 10, 14]. A vast literature has focused on defining and comparing solution concepts, and determining their existence and properties. The first algorithmic study of coalitional games, as far as we know, was performed by Deng and Papadimitriou in [5]. They consider coalitional games defined on graphs, where the players are the vertices and the value of a coalition is determined by the sum of the weights of the edges spanned by these players. This can be efficiently modeled and generalized using attributes. As a formal representation, multi-attribute coalitional games are closely related to the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution networks [7]. Both of these representations are based on dividing a coalitional game into subgames (termed issues in [3] and rules in [7]), and aggregating the subgames via linear combination. The key difference in our work is the unrestricted aggregation of subgames: the aggregation could be via a polynomial function of the attributes, or even by treating the subgames as input to another computational problem such as a min-cost flow problem. The relationship of these models will be made clear after we define the multi-attribute representation in Section 4. Another representation proposed in the literature is one specialized for superadditive games by Conitzer and Sandholm [2].
This representation is succinct, but finding the values of some coalitions may require solving an NP-hard problem. While it is possible for multi-attribute coalitional games to efficiently represent these games, this necessarily requires the solution to an NP-hard problem in order to find out the values of some coalitions. In this paper, we stay within the boundary of games that admit efficient algorithms for determining the value of coalitions. We will therefore not make further comparisons with [2]. The model of coalitional games with attributes has been considered in the works of Shehory and Kraus. They model the agents as possessing capabilities that indicate their proficiencies in different areas, and consider how to efficiently allocate tasks [12] and the dynamics of coalition formation [13]. Our work differs significantly as our focus is on reasoning about solution concepts. Our model also covers a wider scope, as attributes generalize the notion of capabilities. Yokoo et al. have also considered a model of coalitional games where agents are modeled by sets of skills, and these skills in turn determine the value of coalitions [15]. There are two major differences between their work and ours. Firstly, Yokoo et al. assume that each skill is fundamentally different from another, hence no two agents may possess the same skill. Also, they focus on developing new solution concepts that are robust with respect to manipulation by agents. Our focus is on reasoning about traditional solution concepts. 1 We acknowledge that random instances may not be typical of what happens in practice, but given the generality of our model, it provides the most unbiased view. Our work is also related to the study of cooperative games with committee control [4]. In these games, there is usually an underlying set of resources, each controlled by a (possibly overlapping) set of players known as the committee, engaged in a simple game (defined in Section 3).
Multi-attribute coalitional games generalize these by considering relationships between the committee and the resources beyond simple games. We note that when restricted to simple games, we derive similar results to those in [4]. 3. PRELIMINARIES In this section, we review the relevant concepts of coalitional game theory and its two most important solution concepts - the Shapley value and the core. We then define the computational questions that will be studied in the second half of the paper. 3.1 Coalitional Games Throughout this paper, we assume that payoffs to groups of agents can be freely distributed among their members. This transferable utility assumption is commonly made in coalitional game theory. The canonical representation of a coalitional game with transferable utility is its characteristic form. Definition 1. A coalitional game with transferable utility in characteristic form is denoted by the pair ⟨N, v⟩, where • N is the set of agents; and • v : 2^N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff. A group of agents in a game is known as a coalition, and the entire set of agents is known as the grand coalition. An important class of coalitional games is the class of monotonic games. Definition 2. A coalitional game is monotonic if for all S ⊂ T ⊆ N, v(S) ≤ v(T). Another important class of coalitional games is the class of simple games. In a simple game, a coalition either wins, in which case it has a value of 1, or loses, in which case it has a value of 0. It is often used to model voting situations. Simple games are often assumed to be monotonic, i.e., if S wins, then for all T ⊇ S, T also wins. This coincides with the notion of using simple games as a model for voting. If a simple game is monotonic, then it is fully described by its set of minimal winning coalitions, i.e., coalitions S for which v(S) = 1 but for all coalitions T ⊂ S, v(T) = 0.
An outcome in a coalitional game specifies the utilities the agents receive. A solution concept assigns to each coalitional game a set of reasonable outcomes. Different solution concepts attempt to capture in some way outcomes that are stable and/or fair. Two of the best known solution concepts are the Shapley value and the core. The Shapley value is a normative solution concept that prescribes a fair way to divide the gains from cooperation when the grand coalition is formed. The division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents. Formally, Definition 3. The Shapley value of agent i, φi(v), in game ⟨N, v⟩ is given by the formula φi(v) = Σ_{S ⊆ N\{i}} (|S|! (|N| − |S| − 1)! / |N|!) (v(S ∪ {i}) − v(S)). The core is a descriptive solution concept that focuses on outcomes that are stable. Stability under the core means that no set of players can jointly deviate to improve their payoffs. Definition 4. An outcome x ∈ R^{|N|} is in the core of the game ⟨N, v⟩ if for all S ⊆ N, Σ_{i∈S} xi ≥ v(S). Note that the core of a game may be empty, i.e., there may not exist any payoff vector that satisfies the stability requirement for the given game. 3.2 Computational Problems We will study the following three problems related to solution concepts in coalitional games. Problem 1. (Shapley Value) Given a description of the coalitional game and an agent i, compute the Shapley value of agent i. Problem 2. (Core Membership) Given a description of the coalitional game and a payoff vector x such that Σ_{i∈N} xi = v(N), determine if Σ_{i∈S} xi ≥ v(S) for all S ⊆ N. Problem 3. (Core Non-emptiness) Given a description of the coalitional game, determine if there exists any payoff vector x such that Σ_{i∈S} xi ≥ v(S) for all S ⊆ N, and Σ_{i∈N} xi = v(N). Note that the complexity of the above problems depends on how the game is described.
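Definitions 3 and 4 and Problems 1 and 2 can be made concrete with a brute-force sketch. This is exponential time, so it is only for building intuition on small games; the three-agent majority game at the end is a hypothetical example, not taken from the paper.

```python
from itertools import chain, combinations, permutations
from math import factorial

def shapley_value(agents, v, i):
    """Definition 3: average marginal contribution of agent i
    over all orderings of the agents (brute force)."""
    total = 0.0
    for order in permutations(agents):
        before = frozenset(order[:order.index(i)])
        total += v(before | {i}) - v(before)
    return total / factorial(len(agents))

def in_core(agents, v, x):
    """Problem 2 (Core Membership): payoff vector x (dict agent -> payoff)
    must pay out exactly v(N) and give every coalition S at least v(S)."""
    agents = list(agents)
    if abs(sum(x[i] for i in agents) - v(frozenset(agents))) > 1e-9:
        return False
    subsets = chain.from_iterable(
        combinations(agents, r) for r in range(len(agents) + 1))
    return all(sum(x[i] for i in S) >= v(frozenset(S)) - 1e-9 for S in subsets)

# Three-agent simple majority game: a coalition wins iff it has >= 2 agents.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
```

By symmetry each agent's Shapley value in the majority game is 1/3, yet the equal split is not in the core (the core of this game is empty): any two agents together are owed v(S) = 1 but receive only 2/3.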
All these problems would be easy if the game were described by its characteristic form, but only because that description takes space exponential in the number of agents, and hence a simple brute-force approach takes time polynomial in the size of the input description. To properly understand the computational complexity questions, we have to look at compact representations. 4. FORMAL MODEL In this section, we give a formal definition of multi-attribute coalitional games, and show how they relate to some of the representations discussed in the literature. We also discuss some limitations of our proposed approach. 4.1 Multi-Attribute Coalitional Games A multi-attribute coalitional game (MACG) consists of two parts: a description of the attributes of the agents, which we term an attribute model, and a function that assigns values to combinations of attributes. Together, they induce a coalitional game over the agents. We first define the attribute model. Definition 5. An attribute model is a tuple ⟨N, M, A⟩, where • N denotes the set of agents, of size n; • M denotes the set of attributes, of size m; • A ∈ R^{m×n}, the attribute matrix, describes the values of the attributes of the agents, with Aij denoting the value of attribute i for agent j. We can directly define a function that maps combinations of attributes to real values. However, for many problems, we can describe the function more compactly by computing it in two steps: we first compute an aggregate value for each attribute, then compute the values of combinations of attributes using only the aggregated information. Formally, Definition 6. An aggregating function (or aggregator) takes as input a row of the attribute matrix and a coalition S, and summarizes the attributes of the agents in S with a single number. We can treat it as a mapping from R^n × 2^N → R. Aggregators often perform basic arithmetic or logical operations.
For example, an aggregator may compute the sum of the attributes, or evaluate a Boolean expression by treating the agents i ∈ S as true and j ∉ S as false. Analogous to the notion of simple games, we call an aggregator simple if its range is {0, 1}. For any aggregator, there is a set of relevant agents and a set of irrelevant agents. An agent i is irrelevant to aggregator aj if aj(S ∪ {i}) = aj(S) for all S ⊆ N. A relevant agent is one that is not irrelevant. Given the attribute matrix, an aggregator assigns a value to each coalition S ⊆ N. Thus, each aggregator defines a game over N. For aggregator aj, we refer to this induced game as the game of attribute j, and denote it by aj(A). When the attribute matrix is clear from the context, we may drop A and simply denote the game as aj. We may refer to the game as the aggregator when no ambiguities arise. We now define the second step of the computation with the help of aggregators. Definition 7. An aggregate value function takes as input the values of the aggregators and maps these to a real value. In this paper, we will focus on having one aggregator per attribute. Therefore, in what follows, we will refer to the aggregate value function as a function over the attributes. Note that when all aggregators are simple, the aggregate value function implicitly defines a game over the attributes, as it assigns a value to each set of attributes T ⊆ M. We refer to this as the game among attributes. We now define multi-attribute coalitional games. Definition 8. A multi-attribute coalitional game is defined by the tuple ⟨N, M, A, a, w⟩, where • ⟨N, M, A⟩ is an attribute model; • a is a set of aggregators, one for each attribute; we can treat the set together as a vector function, mapping R^{m×n} × 2^N → R^m; • w : R^m → R is an aggregate value function.
This induces a coalitional game with transferable payoffs ⟨N, v⟩ with players N and the value function defined by v(S) = w(a(A, S)). Note that a MACG as defined is fully capable of representing any coalitional game ⟨N, v⟩. We can simply take the set of attributes as equal to the set of agents, i.e., M = N, an identity matrix for A, aggregators of sums, and the aggregate value function w to be v. 4.2 An Example Let us illustrate how a MACG can be used to represent a game with a simple example. Suppose there are four types of resources in the world: gold, silver, copper, and iron; each agent is endowed with some amount of these resources, and there is a fixed price for each of the resources in the market. This game can be described using a MACG with an attribute matrix A, where Aij denotes the amount of resource i that agent j is endowed with. For each resource, the aggregator sums together the amounts of the resource the agents have. Finally, the aggregate value function takes the dot product between the market price vector and the aggregate vector. Note the inherent flexibility in the model: only limited work would be required to update the game as the market price changes, or when a new agent arrives. 4.3 Relationship with Other Representations As briefly discussed in Section 2, MACG is closely related to two other representations in the literature, the multi-issue representation of Conitzer and Sandholm [3], and our work on marginal contribution nets [7]. To make their relationships clear, we first review these two representations. We have changed the notation from the original papers to highlight their similarities. Definition 9. A multi-issue representation is given as a vector of coalitional games, (v1, v2, ..., vm), each possibly with a varying set of agents, say N1, ..., Nm. The coalitional game ⟨N, v⟩ induced by a multi-issue representation has player set N = ⋃_{i=1}^m Ni, and for each coalition S ⊆ N, v(S) = Σ_{i=1}^m vi(S ∩ Ni).
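The resource example in Section 4.2 maps directly onto Definition 8. A minimal Python sketch follows; the endowments and prices are invented for illustration, not taken from the paper.

```python
# Attribute matrix A: rows are resources (gold, silver, copper, iron),
# columns are agents 0..2; A[i][j] = amount of resource i held by agent j.
A = [[2, 0, 1],
     [0, 3, 1],
     [4, 1, 0],
     [1, 2, 2]]
PRICES = [5.0, 3.0, 1.0, 2.0]  # assumed fixed market prices

def aggregate(A, S):
    """One sum-aggregator per attribute (Definition 6): total amount
    of each resource held by coalition S (an iterable of agent indices)."""
    return [sum(row[j] for j in S) for row in A]

def v(S):
    """Aggregate value function w: dot product of the market price
    vector with the aggregated resource vector, so v(S) = w(a(A, S))."""
    return sum(p * t for p, t in zip(PRICES, aggregate(A, S)))
```

Note how the flexibility claimed above shows up here: a price change only touches PRICES, and a new agent only appends a column to A.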
The games vi are assumed to be represented in characteristic form. Definition 10. A marginal contribution net is given as a set of rules (r1, r2, ..., rm), where rule ri has a weight wi and a pattern pi that is a conjunction over literals (positive or negative). The agents are represented as literals. A coalition S is said to satisfy the pattern pi if, treating each agent i ∈ S as true and each agent j ∉ S as false, pi(S) evaluates to true. Denote the set of literals involved in rule i by Ni. The coalitional game ⟨N, v⟩ induced by a marginal contribution net has player set N = ⋃_{i=1}^m Ni, and for each coalition S ⊆ N, v(S) = Σ_{i : pi(S)=true} wi. From these definitions, we can see the relationships among the three representations clearly. An issue of a multi-issue representation corresponds to an attribute in MACG. Similarly, a rule of a marginal contribution net corresponds to an attribute in MACG. The aggregate value functions are simple sums and weighted sums for the respective representations. Therefore, it is clear that MACG will be no less succinct than either representation. However, MACG differs in two important ways. Firstly, there is no restriction on the operations performed by the aggregate value function over the attributes. This is an important generalization over the linear combination of issues or rules in the other two approaches. In particular, there are games for which MACG can be exponentially more compact. The proof of the following proposition can be found in the Appendix. Proposition 1. Consider the parity game ⟨N, v⟩ where coalition S ⊆ N has value v(S) = 1 if |S| is odd, and v(S) = 0 otherwise. MACG can represent the game in O(n) space. Both the multi-issue representation and marginal contribution nets require O(2^n) space. A second important difference of MACG is that the attribute model and the value function are cleanly separated.
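Definition 10 can be illustrated with a small evaluator. The two rules and their weights below are hypothetical, chosen only to show how patterns with positive and negative literals determine v(S).

```python
# Each rule is ((positive_literals, negative_literals), weight): a coalition
# S satisfies the pattern iff it contains every positive literal and none
# of the negative ones.
RULES = [
    (({"a", "b"}, set()), 5.0),   # pattern "a AND b"
    (({"c"}, {"b"}), 2.0),        # pattern "c AND NOT b"
]

def v(S):
    """Value of coalition S under the marginal contribution net:
    the sum of the weights of all satisfied rules."""
    S = set(S)
    return sum(w for (pos, neg), w in RULES
               if pos <= S and not (neg & S))
```

For instance, the grand coalition {a, b, c} earns only the first rule's weight, since the presence of b blocks the second rule's negative literal.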
As suggested by the example in Section 4.2, this often allows more efficient updates of the values of the game as it changes. Also, the same attribute model can be evaluated using different value functions, and the same value function can be used to evaluate different attribute models. Therefore, MACG is very suitable for representing multiple games. We believe the problems of updating games and representing multiple games are interesting future directions to explore. 4.4 Limitation of One Aggregator per Attribute Before focusing on one aggregator per attribute for the rest of the paper, it is natural to wonder whether anything is lost by such a restriction. The unfortunate answer is yes, best illustrated by the following. Consider again the problem of forming a soccer team discussed in the introduction, where we model the attributes of the agents as their abilities to take the four positions on the field, and the value of a team depends on the positions covered. If we first aggregate each of the attributes individually, we will lose the distributional information of the attributes. In other words, we will not be able to distinguish between two teams, one of which has a player for each position, while the other has one player who can play all positions but the rest can only play the same one position. This loss of distributional information can be recovered by using aggregators that take as input multiple rows of the attribute matrix rather than just a single row. Alternatively, if we leave such attributes untouched, we can leave the burden of correctly evaluating these attributes to the aggregate value function. However, for many problems that we found in the literature, such as the transportation domain of [12] and the flow game setting of [4], the distribution of attributes does not affect the value of the coalitions. In addition, the problem may become unmanageably complex as we introduce more complicated aggregators.
Therefore, we will focus on the representation as defined in Definition 8.

5. SHAPLEY VALUE

In this section, we focus on computational issues of finding the Shapley value of a player in MACG. We first set up the problem with the use of oracles to avoid complexities arising from the aggregators. We then show that when attributes are linearly separable, the Shapley value can be efficiently computed. This generalizes the proofs of related results in the literature. For the non-linearly separable case, we consider a natural heuristic for estimating the Shapley value, and study the heuristic theoretically and empirically.

5.1 Problem Setup

We start by noting that computing the Shapley value for simple aggregators can be hard in general. In particular, we can define aggregators to compute a weighted majority over their input set of agents. As noted in [6], finding the Shapley value of a weighted majority game is #P-hard. Therefore, discussion of the complexity of the Shapley value for MACG with unrestricted aggregators is moot. Instead of placing explicit restrictions on the aggregators, we assume that the Shapley value of each aggregator can be answered by an oracle. For notation, let $\phi_i(u)$ denote the Shapley value of player $i$ in some game $u$. We make the following assumption:

Assumption 1. For each aggregator $a_j$ in a MACG, there is an associated oracle that answers the Shapley value of the game of attribute $j$. In other words, $\phi_i(a_j)$ is known.

For many aggregators that perform basic operations over their inputs, polynomial-time oracles for the Shapley value exist. These include operations such as sums, and symmetric functions when the attributes are restricted to $\{0, 1\}$. Also, when only a few agents have an effect on the aggregator, brute-force computation of the Shapley value is feasible. Therefore, the above assumption is reasonable for many settings. In any case, such abstraction allows us to focus on the aggregate value function.
5.2 Linearly Separable Attributes

When the aggregate value function can be written as a linear function of the attributes, the Shapley value of the game can be efficiently computed.

Theorem 1. Given a game $\langle N, v\rangle$ represented as a MACG $\langle N, M, A, a, w\rangle$, if the aggregate value function can be written as a linear function of its attributes, i.e.,

$$w(a(A, S)) = \sum_{j=1}^{m} c_j\, a_j(A, S),$$

then the Shapley value of agent $i$ in $\langle N, v\rangle$ is given by

$$\phi_i(v) = \sum_{j=1}^{m} c_j\, \phi_i(a_j). \qquad (1)$$

Proof. First, we note that the Shapley value satisfies an additivity axiom [11]: $\phi_i(a + b) = \phi_i(a) + \phi_i(b)$, where $\langle N, a + b\rangle$ is the game defined by $(a + b)(S) = a(S) + b(S)$ for all $S \subseteq N$. It is also clear that the Shapley value satisfies scaling, namely $\phi_i(\alpha v) = \alpha\,\phi_i(v)$, where $(\alpha v)(S) = \alpha\, v(S)$ for all $S \subseteq N$. Since the aggregate value function can be expressed as a weighted sum of games of attributes,

$$\phi_i(v) = \phi_i(w(a)) = \phi_i\Big(\sum_{j=1}^{m} c_j a_j\Big) = \sum_{j=1}^{m} c_j\,\phi_i(a_j).$$

Many positive results regarding efficient computation of the Shapley value in the literature depend on some form of linearity. Examples include the edge-spanning game on graphs by Deng and Papadimitriou [5], the multi-issue representation of [3], and the marginal contribution nets of [7]. The key to determining whether the Shapley value can be efficiently computed is the linear separability of attributes. Once this is satisfied, as long as the Shapley value of each game of attributes can be efficiently determined, the Shapley value of the entire game can be efficiently computed.

Corollary 1. The Shapley value for the edge-spanning game of [5], games in multi-issue representation [3], and games in marginal contribution nets [7], can be computed in polynomial time.

5.3 Polynomial Combination of Attributes

When the aggregate value function cannot be expressed as a linear function of its attributes, computing the Shapley value exactly is difficult.
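Before turning to the polynomial case, Theorem 1 can be checked directly on tiny games: compute the Shapley value by brute force over all orderings and compare it against the linear combination of the attribute games' Shapley values. This is our own illustrative sketch; the two attribute games below are toy examples, not from the paper.

```python
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value by averaging marginal contributions over all
    orderings of the players (exponential; fine for tiny games)."""
    players = list(players)
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        S = set()
        for i in order:
            phi[i] += v(S | {i}) - v(S)
            S.add(i)
    return {i: x / len(orders) for i, x in phi.items()}

# Two simple attribute games (hypothetical): a1 evaluates to 1 iff agent 1
# is present; a2 is a majority game among agents {1, 2, 3}.
a1 = lambda S: 1 if 1 in S else 0
a2 = lambda S: 1 if len(S & {1, 2, 3}) >= 2 else 0
v = lambda S: 2 * a1(S) + 3 * a2(S)  # linear aggregate, c1 = 2, c2 = 3

phi_v = shapley([1, 2, 3], v)
phi_a1 = shapley([1, 2, 3], a1)
phi_a2 = shapley([1, 2, 3], a2)
# Theorem 1: phi_v[i] equals 2*phi_a1[i] + 3*phi_a2[i] for every player i.
```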
Here, we will focus on aggregate value functions that can be expressed as some polynomial of their attributes. If we do not place a limit on the degree of the polynomial, and the game $\langle N, v\rangle$ is not necessarily monotonic, the problem is #P-hard.

Theorem 2. Computing the Shapley value of a MACG $\langle N, M, A, a, w\rangle$, when $w$ can be an arbitrary polynomial of the aggregates $a$, is #P-hard, even when the Shapley value of each aggregator can be efficiently computed.

The proof is via reduction from three-dimensional matching, and details can be found in the Appendix. Even if we restrict ourselves to monotonic games and non-negative coefficients for the polynomial aggregate value function, computing the exact Shapley value can still be hard. For example, suppose there are two attributes. All agents in some set $B \subseteq N$ possess the first attribute, all agents in some set $C \subseteq N$ possess the second, and $B$ and $C$ are disjoint. For a coalition $S \subseteq N$, the aggregator for the first attribute evaluates to 1 if and only if $|S \cap B| \ge b'$, and similarly, the aggregator for the second evaluates to 1 if and only if $|S \cap C| \ge c'$. Let the cardinalities of the sets $B$ and $C$ be $b$ and $c$. We can verify that the Shapley value of an agent $i \in B$ equals

$$\phi_i = \frac{1}{b} \sum_{k=0}^{b'-1} \frac{\binom{b}{k}\binom{c}{c'-1}}{\binom{b+c}{c'+k-1}} \cdot \frac{c - c' + 1}{b + c - c' - k + 1}.$$

The equation corresponds to a weighted sum of probability values of hypergeometric random variables. The correspondence with the hypergeometric distribution is due to the sampling-without-replacement nature of the Shapley value. As far as we know, there is no closed-form formula to evaluate the sum above. In addition, as the number of attributes involved increases, we move to multivariate hypergeometric random variables, and the number of summands grows exponentially in the number of attributes. Therefore, it is highly unlikely that the exact Shapley value can be determined efficiently, and so we look for approximations.
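The Shapley value of this two-threshold game can be checked numerically: compute it by brute force over all orderings of a small instance and compare against the hypergeometric-style sum described above. This sketch is our own (our reading of the sum; the names `b`, `c`, `bp`, `cp` stand for $b$, $c$, $b'$, $c'$ from the text).

```python
from itertools import permutations
from math import comb

def phi_B_formula(b, c, bp, cp):
    """Hypergeometric-style sum for the Shapley value of an agent in B,
    in the game v(S) = [|S∩B| >= b'] * [|S∩C| >= c'], |B| = b, |C| = c."""
    total = 0.0
    for k in range(bp):
        total += (comb(b, k) * comb(c, cp - 1) / comb(b + c, cp + k - 1)
                  * (c - cp + 1) / (b + c - cp - k + 1))
    return total / b

def phi_B_brute(b, c, bp, cp):
    """Average marginal contribution of one fixed agent of B over all
    orderings of the b + c players (exponential; tiny games only)."""
    B, C = set(range(b)), set(range(b, b + c))
    v = lambda S: int(len(S & B) >= bp and len(S & C) >= cp)
    orders = list(permutations(range(b + c)))
    total = 0
    for order in orders:
        S = set()
        for i in order:
            if i == 0:  # agent 0 belongs to B
                total += v(S | {0}) - v(S)
            S.add(i)
    return total / len(orders)
```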
5.3.1 Approximation

First, we need criteria for evaluating how well an estimate $\hat\phi$ approximates the true Shapley value $\phi$. We consider the following three natural criteria:

• Maximum underestimate: $\max_i \phi_i / \hat\phi_i$
• Maximum overestimate: $\max_i \hat\phi_i / \phi_i$
• Total variation: $\frac{1}{2} \sum_i |\phi_i - \hat\phi_i|$, or alternatively $\max_S |\sum_{i \in S} \phi_i - \sum_{i \in S} \hat\phi_i|$

The total variation criterion is more meaningful when we normalize the game to having a value of 1 for the grand coalition, i.e., $v(N) = 1$. We can also define additive analogues of the under- and overestimates, especially when the games are normalized.

We will assume for now that the aggregate value function is a polynomial over the attributes with non-negative coefficients. We will also assume that the aggregators are simple. We will evaluate a specific heuristic that is analogous to Equation (1). Suppose the aggregate value function can be written as a polynomial with $p$ terms,

$$w(a(A, S)) = \sum_{j=1}^{p} c_j\, a_{j(1)}(A, S)\, a_{j(2)}(A, S) \cdots a_{j(k_j)}(A, S). \qquad (2)$$

For term $j$, the coefficient of the term is $c_j$, its degree is $k_j$, and the attributes involved in the term are $j(1), \ldots, j(k_j)$. We compute an estimate $\hat\phi$ of the Shapley value as

$$\hat\phi_i = \sum_{j=1}^{p} \sum_{l=1}^{k_j} \frac{c_j}{k_j}\, \phi_i(a_{j(l)}). \qquad (3)$$

The idea behind the estimate is that for each term, we divide the value of the term equally among all its attributes; this is represented by the factor $c_j / k_j$. Then, for each attribute of an agent, we assign the agent a share of value from the attribute. This share is determined by the Shapley value of the simple game of that attribute. Without considering the details of the simple games, this constitutes a fair (but blind) rule of sharing.

5.3.2 Theoretical analysis of heuristic

We can derive a simple and tight bound for the maximum (multiplicative) underestimate of the heuristic estimate.

Theorem 3. Given a game $\langle N, v\rangle$ represented as a MACG $\langle N, M, A, a, w\rangle$, suppose $w$ can be expressed as a polynomial function of its attributes (cf. Equation (2)).
Let $K = \max_j k_j$, i.e., the maximum degree of the polynomial. Let $\hat\phi$ denote the estimated Shapley value using Equation (3), and $\phi$ denote the true Shapley value. Then for all $i \in N$, $\phi_i \le K \hat\phi_i$.

Proof. We bound the maximum underestimate term by term. Let $t_j$ be the $j$-th term of the polynomial. We note that the term can be treated as a game among attributes, as it assigns a value to each coalition $S \subseteq N$. Without loss of generality, renumber attributes $j(1)$ through $j(k_j)$ as $1$ through $k_j$, so that

$$t_j(S) = c_j \prod_{l=1}^{k_j} a_l(A, S).$$

To make the equations less cluttered, let

$$B(N, S) = \frac{|S|!\,(|N| - |S| - 1)!}{|N|!},$$

and, for a game $a$, let the contribution of agent $i$ to a group $S$, $i \notin S$, be $\Delta_i(a, S) = a(S \cup \{i\}) - a(S)$. The true Shapley value of the game $t_j$ is

$$\phi_i(t_j) = c_j \sum_{S \subseteq N \setminus \{i\}} B(N, S)\, \Delta_i(t_j, S).$$

For each coalition $S$, $i \notin S$, $\Delta_i(t_j, S) = 1$ only if $\Delta_i(a_{l^*}, S) = 1$ for at least one attribute $l^*$. Therefore, if we sum over all the attributes, we are sure to have included $l^*$:

$$\phi_i(t_j) \le c_j \sum_{l=1}^{k_j} \sum_{S \subseteq N \setminus \{i\}} B(N, S)\, \Delta_i(a_l, S) = k_j \sum_{l=1}^{k_j} \frac{c_j}{k_j}\, \phi_i(a_l) = k_j\, \hat\phi_i(t_j),$$

where $\hat\phi_i(t_j)$ is the part of the estimate contributed by term $j$. Summing over the terms, we see that the worst-case underestimate is by at most the maximum degree.

Without loss of generality, since the bound is multiplicative, we can normalize the game to having $v(N) = 1$. As a corollary, because the estimate cannot undervalue any set by more than a factor of $K$, we obtain a bound on the total variation:

Corollary 2. The total variation between the estimated Shapley value and the true Shapley value, for a $K$-degree-bounded polynomial aggregate value function, is at most $\frac{K-1}{K}$.

We can show that this bound is tight.

Example 1. Consider a game with $n$ players and $K$ attributes. Let the first $(n-1)$ agents be members of the first $(K-1)$ attributes, and let each of the corresponding aggregators return 1 if any one of the first $(n-1)$ agents is present. Let the $n$-th agent be the sole member of the $K$-th attribute. The estimated Shapley value will assign a value of $\frac{K-1}{K} \cdot \frac{1}{n-1}$ to each of the first $(n-1)$ agents and $\frac{1}{K}$ to the $n$-th agent.
However, the true Shapley value of the $n$-th agent tends to 1 as $n \to \infty$, and the total variation approaches $\frac{K-1}{K}$.

In general, we cannot bound how much $\hat\phi$ may overestimate the true Shapley value. The problem is that $\hat\phi_i$ may be non-zero for agent $i$ even though $i$ has no influence over the outcome of the game once the attributes are multiplied together, as illustrated by the following example.

Example 2. Consider a game with 2 players and 2 attributes, and let the first agent be a member of both attributes, and the other agent a member of the second attribute only. For a coalition $S$, the first aggregator evaluates to 1 if agent $1 \in S$, and the second aggregator evaluates to 1 if both agents are in $S$. While agent 2 is not a dummy with respect to the second attribute, it is a dummy with respect to the product of the attributes. Agent 2 will be assigned a value of $\frac{1}{4}$ by the estimate.

As mentioned, a simple monotonic game is fully described by its set of minimal winning coalitions. When the simple aggregators are represented as such, it is possible to check, in polynomial time, for agents that turn into dummies after attributes are multiplied together. Therefore, we can improve the heuristic estimate in this special case.

5.3.3 Empirical evaluation

Due to a lack of benchmark problems for coalitional games, we have tested the heuristic on random instances. We believe more meaningful results can be obtained when we have real instances to test this heuristic on. Our experiment is set up as follows. We control three parameters of the experiment: the number of players (6-10), the number of attributes (3-8), and the maximum degree of the polynomial (2-5).

[Figure 1: Experimental results. (a) Effect of max degree; (b) effect of number of attributes. Both panels plot total variation distance against the number of players.]
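The heuristic of Equation (3) itself is only a few lines of code. The sketch below is our own (hypothetical names): each term's coefficient is split evenly among its attributes, and each attribute's share is divided according to the Shapley value of that attribute's simple game, as supplied by the per-attribute oracles.

```python
def estimate_shapley(players, terms, phi_attr):
    """Heuristic estimate of Equation (3).
    terms: list of (c_j, [attribute ids in term j]);
    phi_attr[a][i]: Shapley value of player i in attribute a's game
    (assumed supplied by the per-attribute oracles)."""
    est = {i: 0.0 for i in players}
    for c, attrs in terms:
        k = len(attrs)
        for a in attrs:
            for i in players:
                est[i] += (c / k) * phi_attr[a].get(i, 0.0)
    return est

# The two-player product game of Example 2: w = a1 * a2 (one term, c = 1,
# degree 2); attribute 1's game has agent 1 as dictator, attribute 2's game
# is unanimity of {1, 2}.
phi_attr = {1: {1: 1.0}, 2: {1: 0.5, 2: 0.5}}
est = estimate_shapley([1, 2], [(1.0, [1, 2])], phi_attr)
# est[2] is 1/4, matching Example 2, even though agent 2 is a dummy in a1*a2.
```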
For each attribute, we randomly sample one to three minimal winning coalitions. We then randomly generate a polynomial of the desired maximum degree with a random number (3-12) of terms, each with a random positive weight. We normalize each game to have $v(N) = 1$. The results of the experiments are shown in Figure 1. The y-axis of the graphs shows the total variation, and the x-axis the number of players. Each data point is an average of approximately 700 random samples.

Figure 1(a) explores the effect of the maximum degree and the number of players when the number of attributes is fixed (at six). As expected, the total variation increases as the maximum degree increases. On the other hand, there is only a very small increase in error as the number of players increases. The error is nowhere near the theoretical worst-case bound of $\frac{1}{2}$ to $\frac{4}{5}$ for polynomials of degrees 2 to 5.

Figure 1(b) explores the effect of the number of attributes and the number of players when the maximum degree of the polynomial is fixed (at three). We first note that the three lines are quite tightly clustered together, suggesting that the number of attributes has relatively little effect on the error of the estimate. As the number of attributes increases, the total variation decreases. We think this is an interesting phenomenon: it is probably due to the precise construction required for the worst-case bound, so as more attributes become available, we obtain more diverse terms in the polynomial, and the diversity pushes away from the worst-case bound.

6. CORE-RELATED QUESTIONS

In this section, we look at the complexity of two computational problems related to the core: Core Non-emptiness and Core Membership. We show that non-emptiness of the core of the game among attributes and of the cores of the aggregators implies non-emptiness of the core of the game induced by the MACG.
We also show that there appears to be no such general relationship relating the core memberships of the game among attributes, the games of attributes, and the game induced by the MACG.

6.1 Problem Setup

There are many problems in the literature for which the questions of Core Non-emptiness and Core Membership are known to be hard [1]. For example, for the edge-spanning game that Deng and Papadimitriou studied [5], both of these questions are coNP-complete. As MACG can model the edge-spanning game in the same amount of space, these hardness results hold for MACG as well. As in the case of computing the Shapley value, we attempt to find a way around the hardness barrier by assuming the existence of oracles, and try to build algorithms with these oracles. First, we consider the aggregate value function.

Assumption 2. For a MACG $\langle N, M, A, a, w\rangle$, we assume there are oracles that answer the questions of Core Non-emptiness and Core Membership for the aggregate value function $w$.

When the aggregate value function is a non-negative linear function of its attributes, the core is always non-empty, and core membership can be determined efficiently. The concept of the core for the game among attributes makes the most sense when the aggregators are simple games. We will further assume that these simple games are monotonic.

Assumption 3. For a MACG $\langle N, M, A, a, w\rangle$, we assume all aggregators are monotonic and simple. We also assume there are oracles that answer the questions of Core Non-emptiness and Core Membership for the aggregators.

We consider this a mild assumption. Recall that monotonic simple games are fully described by their sets of minimal winning coalitions (cf. Section 3). If the aggregators are represented as such, Core Non-emptiness and Core Membership can be checked in polynomial time. This is due to the following well-known result regarding simple games:

Lemma 1.
A simple game $\langle N, v\rangle$ has a non-empty core if and only if it has a set of veto players, say $V$, such that $v(S) = 0$ for all $S \not\supseteq V$. Further, a payoff vector $x$ is in the core if and only if $x_i = 0$ for all $i \notin V$.

6.2 Core Non-emptiness

There is a strong connection between the non-emptiness of the cores of the game among attributes, the games of the attributes, and the game induced by a MACG.

Theorem 4. Given a game $\langle N, v\rangle$ represented as a MACG $\langle N, M, A, a, w\rangle$, if the core of the game among attributes, $\langle M, w\rangle$, is non-empty, and the cores of the games of attributes are non-empty, then the core of $\langle N, v\rangle$ is non-empty.

Proof. Let $u$ be an arbitrary payoff vector in the core of the game among attributes, $\langle M, w\rangle$. For each attribute $j$, let $\theta^j$ be an arbitrary payoff vector in the core of the game of attribute $j$. By Lemma 1, each attribute $j$ must have a set of veto players; let this set be denoted by $P_j$. For each agent $i \in N$, let $y_i = \sum_j u_j \theta^j_i$. We claim that this vector $y$ is in the core of $\langle N, v\rangle$. Consider any coalition $S \subseteq N$:

$$v(S) = w(a(A, S)) \le \sum_{j : S \supseteq P_j} u_j. \qquad (4)$$

This is true because an aggregator cannot evaluate to 1 without all members of its veto set. For any attribute $j$, by Lemma 1, $\sum_{i \in P_j} \theta^j_i = 1$. Therefore,

$$\sum_{j : S \supseteq P_j} u_j = \sum_{j : S \supseteq P_j} u_j \sum_{i \in P_j} \theta^j_i = \sum_{i \in S}\; \sum_{j : S \supseteq P_j} u_j \theta^j_i \le \sum_{i \in S} y_i.$$

Note that the proof is constructive, and hence if we are given an element of the core of the game among attributes, we can construct an element of the core of the coalitional game. From Theorem 4, we can obtain the following corollaries that have been previously shown in the literature.

Corollary 3. The core of the edge-spanning game of [5] is non-empty when the edge weights are non-negative.

Proof. Let the players be the vertices, and their attributes the edges incident on them. For each attribute, there is a veto set, namely both endpoints of the edge. As previously observed, an aggregate value function that is a non-negative linear function of its aggregates has a non-empty core.
Therefore, the preconditions of Theorem 4 are satisfied, and the edge-spanning game with non-negative edge weights has a non-empty core.

Corollary 4 (Theorem 1 of [4]). The core of a flow game with committee control, where each edge is controlled by a simple game with a veto set of players, is non-empty.

Proof. We treat each edge of the flow game as an attribute, so each attribute has a veto set of players. The core of a flow game (without committees) has been shown to be non-empty in [8]. We can again invoke Theorem 4 to show the non-emptiness of the core for flow games with committee control.

However, the core of the game induced by a MACG may be non-empty even when the core of the game among attributes is empty, as illustrated by the following example.

Example 3. Suppose the minimal winning coalition of every aggregator in a MACG $\langle N, M, A, a, w\rangle$ is $N$; then $v(S) = 0$ for all coalitions $S \subset N$. As long as $v(N) \ge 0$, any non-negative vector $x$ that satisfies $\sum_{i \in N} x_i = v(N)$ is in the core of $\langle N, v\rangle$.

Complementary to the example above, when all the aggregators have empty cores, the core of $\langle N, v\rangle$ is also empty.

Theorem 5. Given a game $\langle N, v\rangle$ represented as a MACG $\langle N, M, A, a, w\rangle$, if the cores of all aggregators are empty, $v(N) > 0$, and $v(\{i\}) \ge 0$ for each $i \in N$, then the core of $\langle N, v\rangle$ is empty.

Proof. Suppose, for contradiction, that the core of $\langle N, v\rangle$ is non-empty. Let $x$ be a member of the core, and pick an agent $i$ such that $x_i > 0$ (one exists, since $v(N) > 0$). For each attribute $j$, since the core of the aggregator is empty, by Lemma 1 it has no veto players, and so there is a winning coalition $S_j$ that does not include $i$. Let $S^* = \bigcup_j S_j$. Because $S^*$ is winning for all attributes, $v(S^*) = v(N)$. However,

$$v(N) = \sum_{j \in N} x_j = x_i + \sum_{j \ne i} x_j \ge x_i + \sum_{j \in S^*} x_j > \sum_{j \in S^*} x_j.$$

Therefore, $v(S^*) > \sum_{j \in S^*} x_j$, contradicting the fact that $x$ is in the core of $\langle N, v\rangle$.

We do not have general results regarding the problem of Core Non-emptiness when some of the aggregators have non-empty cores while others have empty cores.
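Before moving on, note that Lemma 1 and the constructive step in the proof of Theorem 4 translate directly into code. The sketch below is our own illustration (function names are ours), assuming each monotonic simple aggregator is given by a list of its winning coalitions, and splitting each attribute's core payoff evenly over that attribute's veto players (any split supported on the veto players would do).

```python
def veto_players(winning_coalitions):
    """Players present in every winning coalition of a monotonic simple
    game. By Lemma 1, the core is non-empty iff this set is non-empty."""
    return set.intersection(*map(set, winning_coalitions))

def core_element(u, veto_sets, players):
    """Theorem 4 construction: u[j] is a core payoff for attribute j in the
    game among attributes; y_i = sum_j u_j * theta^j_i, with theta^j spread
    uniformly over attribute j's veto players."""
    y = {i: 0.0 for i in players}
    for j, P in veto_sets.items():
        for i in P:
            y[i] += u[j] / len(P)
    return y

# Hypothetical instance: attribute "e1" needs players 1 and 2 in every
# winning coalition; attribute "e2" needs players 2 and 3.
vetos = {"e1": veto_players([{1, 2}, {1, 2, 3}]),
         "e2": veto_players([{2, 3}])}
payoff = core_element({"e1": 6.0, "e2": 4.0}, vetos, [1, 2, 3])
```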
We suspect that knowledge about the status of the cores of the aggregators alone is insufficient to decide this problem.

6.3 Core Membership

Since it is possible for the game induced by the MACG to have a non-empty core when the core of the aggregate value function is empty (Example 3), we explore the problem of Core Membership assuming that the cores of both the game among attributes, $\langle M, w\rangle$, and the underlying game, $\langle N, v\rangle$, are known to be non-empty, and ask whether there is any relationship between their members. One reasonable requirement is whether a payoff vector $x$ in the core of $\langle N, v\rangle$ can be decomposed and re-aggregated into a payoff vector $y$ in the core of $\langle M, w\rangle$. Formally,

Definition 11. We say that a vector $x \in \mathbb{R}^n_{\ge 0}$ can be decomposed and re-aggregated into a vector $y \in \mathbb{R}^m_{\ge 0}$ if there exists $Z \in \mathbb{R}^{m \times n}_{\ge 0}$ such that

$$y_i = \sum_{j=1}^{n} Z_{ij} \;\text{ for all } i, \qquad x_j = \sum_{i=1}^{m} Z_{ij} \;\text{ for all } j.$$

We may refer to $Z$ as shares. When there is no restriction on the entries of $Z$, it is always possible to decompose a payoff vector $x$ in the core of $\langle N, v\rangle$ into a payoff vector $y$ in the core of $\langle M, w\rangle$. However, it seems reasonable to require that if an agent $j$ is irrelevant to aggregator $i$, i.e., $j$ never changes the outcome of aggregator $i$, then $Z_{ij}$ should be restricted to be 0. Unfortunately, this restriction is already too strong.

Example 4. Consider a MACG $\langle N, M, A, a, w\rangle$ with two players and three attributes. Suppose agent 1 is irrelevant to attribute 1, and agent 2 is irrelevant to attributes 2 and 3. For any set of attributes $T \subseteq M$, let $w$ be defined as

$$w(T) = \begin{cases} 0 & \text{if } |T| \le 1, \\ 6 & \text{if } |T| = 2, \\ 10 & \text{if } |T| = 3. \end{cases}$$

Since the core of a game with a finite number of players forms a polytope, we can verify that the vectors $(4, 4, 2)$, $(4, 2, 4)$, and $(2, 4, 4)$ fully characterize the core $C$ of $\langle M, w\rangle$. On the other hand, the vector $(10, 0)$ is in the core of $\langle N, v\rangle$. This vector cannot be decomposed and re-aggregated into a vector in $C$ under the stated restriction.
Because of the apparent lack of relationship between members of the core of $\langle N, v\rangle$ and that of $\langle M, w\rangle$, we believe an algorithm for testing Core Membership will require more input than just the veto sets of the aggregators and the oracle of Core Membership for the aggregate value function.

7. CONCLUDING REMARKS

Multi-attribute coalitional games constitute a very natural way of modeling problems of interest. Their space requirements compare favorably with those of other representations discussed in the literature, and hence the representation serves well as a prototype for studying the computational complexity of coalitional game theory for a variety of problems. Positive results obtained under this representation can easily be translated to results about other representations. Some of these corollary results have been discussed in Sections 5 and 6. An important direction to explore in the future is the question of efficiency in updating a game, and how to evaluate the solution concepts without starting from scratch. As pointed out at the end of Section 4.3, MACG is very naturally suited for updates. Representation results regarding the efficiency of updates, and algorithmic results regarding how to compute the different solution concepts from updates, will both be very interesting. Our work on approximating the Shapley value when the aggregate value function is a non-linear function of the attributes suggests more work to be done there as well. Given the natural probabilistic interpretation of the Shapley value, we believe that a random sampling approach may have significantly better theoretical guarantees.

8. REFERENCES
[1] J. M. Bilbao, J. R. Fernández, and J. J. López. Complexity in cooperative game theory. http://www.esi.us.es/~mbilbao.
[2] V. Conitzer and T. Sandholm. Complexity of determining nonemptiness of the core. In Proc. 18th Int. Joint Conf. on Artificial Intelligence, pages 613-618, 2003.
[3] V. Conitzer and T. Sandholm.
Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proc. 19th Nat. Conf. on Artificial Intelligence, pages 219-225, 2004.
[4] I. J. Curiel, J. J. Derks, and S. H. Tijs. On balanced games and games with committee control. OR Spectrum, 11:83-88, 1989.
[5] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19:257-266, May 1994.
[6] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, New York, 1979.
[7] S. Ieong and Y. Shoham. Marginal contribution nets: A compact representation scheme for coalitional games. In Proc. 6th ACM Conf. on Electronic Commerce, pages 193-202, 2005.
[8] E. Kalai and E. Zemel. Totally balanced games and games of flow. Math. Oper. Res., 7:476-478, 1982.
[9] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[10] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[11] L. S. Shapley. A value for n-person games. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games II, number 28 in Annals of Mathematical Studies, pages 307-317. Princeton University Press, 1953.
[12] O. Shehory and S. Kraus. Task allocation via coalition formation among autonomous agents. In Proc. 14th Int. Joint Conf. on Artificial Intelligence, pages 31-45, 1995.
[13] O. Shehory and S. Kraus. A kernel-oriented model for autonomous-agent coalition-formation in general environments: Implementation and results. In Proc. 13th Nat. Conf. on Artificial Intelligence, pages 134-140, 1996.
[14] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1953.
[15] M. Yokoo, V. Conitzer, T. Sandholm, N. Ohta, and A. Iwasaki. Coalitional games in open anonymous environments. In Proc. 20th Nat. Conf.
on Artificial Intelligence, pages 509-515, 2005.

Appendix

We complete the missing proofs from the main text here. To prove Proposition 1, we need the following lemma.

Lemma 2. Marginal contribution nets, when all coalitions are restricted to have values 0 or 1, have the same representational power as an AND/OR circuit of depth two with negation at the literal level (i.e., an AC0 circuit).

Proof. If a rule assigns a negative value in a marginal contribution net, we can rewrite the rule as a corresponding set of at most $n$ rules, where $n$ is the number of agents, each of which has positive value, through application of De Morgan's Law. With all rule values non-negative, the weighted summation step of the marginal contribution net can be viewed as an OR, and each rule as a conjunction over literals, possibly negated. This exactly matches an AND/OR circuit of depth two.

Proof (Proposition 1). The parity game can be represented as a MACG using a single attribute, a sum aggregator, and an aggregate value function that evaluates that sum modulo two. As a Boolean function, parity is known to require an exponential number of prime implicants. By Lemma 2, a prime implicant is the exact analogue of a pattern in a rule of a marginal contribution net. Therefore, to represent the parity function, a marginal contribution net must have an exponential number of rules. Finally, as shown in [7], a marginal contribution net is at worst a factor of $O(n)$ less compact than the multi-issue representation. Therefore, the multi-issue representation will also take exponential space to represent the parity game. This assumes that each issue in the game is represented in characteristic form.

Proof (Theorem 2). An instance of three-dimensional matching is as follows [6]: Given a set $P \subseteq W \times X \times Y$, where $W$, $X$, $Y$ are disjoint sets having the same number $q$ of elements, does there exist a matching $P' \subseteq P$ such that $|P'| = q$ and no two elements of $P'$ agree in any coordinate?
For notation, let $P = \{p_1, p_2, \ldots, p_K\}$. We construct a MACG $\langle N, M, A, a, w\rangle$ as follows:

• $M$: Let attributes 1 to $q$ correspond to the elements of $W$, attributes $(q+1)$ to $2q$ to the elements of $X$, and attributes $(2q+1)$ to $3q$ to the elements of $Y$, and let there be a special attribute $(3q+1)$.
• $N$: Let player $i$ correspond to $p_i$, and let there be a special player $\ell$.
• $A$: Let $A_{ji} = 1$ if the element corresponding to attribute $j$ is in $p_i$. Thus, each of the first $K$ columns has exactly three non-zero entries. We also set $A_{(3q+1)\ell} = 1$.
• $a$: For each aggregator $j$, $a_j(A(S)) = 1$ if and only if the sum of row $j$ of $A(S)$ equals 1.
• $w$: the product over all $a_j$.

In the game $\langle N, v\rangle$ that corresponds to this construction, $v(S) = 1$ if and only if all attributes are covered exactly once. Therefore, for $\ell \notin T \subseteq N$, $v(T \cup \{\ell\}) - v(T) = 1$ if and only if $T$ covers attributes 1 to $3q$ exactly once. Since all such $T$, if any exist, must be of size $q$, the number of three-dimensional matchings is given by

$$\phi_\ell(v) \cdot \frac{(K+1)!}{q!\,(K-q)!}.$$
I-57
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System
In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence.
[ "multi-dimension trust", "reput system", "correl", "dirichlet distribut", "rumour propag", "anonym", "overconfid", "trust model", "heurist", "probabl theori", "data fusion", "doubl count", "rumour propog" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "U", "M", "U", "U", "M" ]
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System Steven Reece1, Alex Rogers2, Stephen Roberts1 and Nicholas R. Jennings2 1 Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK. {reece,sjrob}@robots.ox.ac.uk 2 Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK. {acr,nrj}@ecs.soton.ac.uk ABSTRACT In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Intelligent agents General Terms Algorithms, Design, Theory 1.
INTRODUCTION The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in this context trust can be viewed as providing a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments. To date, however, much of the work within this area has used domain-specific or ad-hoc trust metrics, and has focused on providing heuristics to evaluate and update these metrics using direct experience and reputation reports from other agents (see [8] for a review). Recent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11]. This approach allows many of the desiderata of computational trust models to be addressed through principled means. In particular: (i) it allows agents to update their estimates of the trustworthiness of a supplier as they acquire direct experience, (ii) it provides a natural framework for agents to express their uncertainty in this trustworthiness, and (iii) it allows agents to exchange, combine and filter reputation reports received from other agents. Whilst this approach is attractive, it is somewhat limited in that it has so far only considered single-dimensional outcomes (i.e. whether the contract has succeeded or failed in its entirety). However, in many real-world settings the success or failure of an interaction may be decomposed into several dimensions [7]. This presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made. Furthermore, these dimensions will typically also exhibit correlations. For example, a contract within a supply chain may specify criteria for timeliness, quality and quantity.
A supplier who is suffering delays may attempt a trade-off between these dimensions by supplying the full amount late, or supplying as much as possible (but less than the quantity specified within the contract) on time. Thus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension. To date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad-hoc models do exist; see section 2 for more details). To rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. The starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier. Here we use standard approaches from the literature of data fusion, since this is a well developed field where the notion of multi-dimensional correlated estimates is well established (in that context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors), to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over multiple dimensions. Building upon this, we then devise a novel trust model that addresses the three desiderata discussed above. In more detail, in this paper we extend the state of the art in four key ways: 1. We devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates. 2.
We present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above. We then benchmark this solution and show that it leads to good estimates. 3. We show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another. The sufficient statistics represent aggregations of their direct experience, and thus, express contract outcomes in a compact format with no loss of information. 4. We show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems. Thus, we present a novel solution based upon the idea of private and shared information. We show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence. The remainder of this paper is organised as follows: in section 2 we review related work. In section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4. In sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems. We conclude in section 7. 2. RELATED WORK The need for a multi-dimensional trust model has been recognised by a number of researchers. Sabater and Sierra present a model of reputation, in which agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts. They provide heuristic approaches to combining these impressions to form a measure they call subjective reputation. Likewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4]. 
In his case, each dimension is scored as a real number that represents a comparative value with no strong semantic meaning. He develops a heuristic rule to update these values based on the direct experiences of the individual agent, and a heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers. Whilst he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented. Gujral et al. take a similar approach and present a trust model over multiple domain specific dimensions [5]. They define multi-dimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour. These estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty. Nevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one. By contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension. Jøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6]. The beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty. Moreover, they provide a number of extensions to this initial model including an approach to exchanging reputation reports using the sufficient statistics of the beta distribution, methods to discount the opinions of agents who themselves have low reputation ratings, and techniques to deal with reputations that may change over time. Likewise, Teacy et al.
use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11]. They present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents. They provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates. Thus, they present a principled approach to detecting and removing inconsistent reports. Our work builds upon these more principled approaches. However, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract. We show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract. In the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.). However, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions. 3. SINGLE-DIMENSIONAL TRUST Before presenting our multi-dimensional trust model, we first introduce the notation and formalism that we will use by describing the more familiar single dimensional case. We consider an agent who must decide whether to engage in a future contract with a supplier. This contract will lead to some outcome, o, and we consider that o = 1 if the contract is successfully fulfilled, and o = 0 if not. In order for the agent to make a rational decision, it should consider the utility that it will derive from this contract.
We assume that in the case that the contract is successfully fulfilled, the agent derives a utility u(o = 1), otherwise it receives no utility (we consider only binary contract outcomes here, although extending this to partial outcomes is part of our future work; clearly the model can also be extended to the case where some utility is derived from an unsuccessful outcome). Now, given that the agent is uncertain of the reliability with which the supplier will fulfill the contract, it should consider the expected utility that it will derive, E[U], and this is given by:

E[U] = p(o = 1)u(o = 1)   (1)

where p(o = 1) is the probability that the supplier will successfully fulfill the contract. However, whilst u(o = 1) is known by the agent, p(o = 1) is not. The best the agent can do is to determine a distribution over possible values of p(o = 1) given its direct experience of previous contract outcomes. Given that it has been able to do so, it can then determine an estimate of the expected utility of the contract, E[E[U]] (this quantity is often called the expected expected utility, and this is the notation that we adopt here [2]), and a measure of its uncertainty in this expected utility, Var(E[U]). This uncertainty is important since a risk averse agent may make a decision regarding a contract not only on its estimate of the expected utility of the contract, but also on the probability that the expected utility will exceed some minimum amount. These two properties are given by:

E[E[U]] = p̂(o = 1)u(o = 1)   (2)

Var(E[U]) = Var(p(o = 1))u(o = 1)²   (3)

where p̂(o = 1) and Var(p(o = 1)) are the estimate and uncertainty of the probability that a contract will be successfully fulfilled, and are calculated from the distribution over possible values of p(o = 1) that the agent determines from its direct experience.
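These two quantities have simple closed forms once a beta distribution is adopted for p(o = 1); the following is a minimal Python sketch using the standard beta-posterior results (the function names are ours, not the paper's):

```python
def beta_estimate(n, N):
    """Estimate and variance of p(o = 1) after observing n successfully
    fulfilled contracts out of N (beta posterior with a uniform prior)."""
    p_hat = (n + 1) / (N + 2)
    var = (n + 1) * (N - n + 1) / ((N + 2) ** 2 * (N + 3))
    return p_hat, var

def contract_utility(n, N, u_success):
    """E[E[U]] and Var(E[U]) for a binary contract, as in equations (2)-(3)."""
    p_hat, var = beta_estimate(n, N)
    return p_hat * u_success, var * u_success ** 2
```

For example, with no observations beta_estimate(0, 0) returns the uniform prior mean of 1/2, and the variance shrinks as more contracts are observed.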
The utility based approach that we present here provides an attractive motivation for this model of Teacy et al. [11]. Now, in the case of binary contract outcomes, the beta distribution is the natural choice to represent the distribution over possible values of p(o = 1), since within Bayesian statistics this is well known to be the conjugate prior for binomial observations [3]. By adopting the beta distribution, we can calculate p̂(o = 1) and Var(p(o = 1)) using standard results, and thus, if an agent observed N previous contracts of which n were successfully fulfilled, then:

p̂(o = 1) = (n + 1)/(N + 2)

and:

Var(p(o = 1)) = (n + 1)(N − n + 1) / ((N + 2)²(N + 3))

Note that as expected, the greater the number of contracts the agent observes, the smaller the variance term Var(p(o = 1)), and, thus, the less the uncertainty regarding the probability that a contract will be successfully fulfilled, p̂(o = 1). 4. MULTI-DIMENSIONAL TRUST We now extend the description above to consider contracts between suppliers and agents that are represented by multiple dimensions, and hence the success or failure of a contract can be decomposed into the success or failure of each separate dimension. Consider again the example of the supply chain that specifies the timeliness, quantity, and quality of the goods that are to be delivered. Thus, within our trust model oa = 1 now indicates a successful outcome over dimension a of the contract and oa = 0 indicates an unsuccessful one. A contract outcome, X, is now composed of a vector of individual contract part outcomes (e.g. X = {oa = 1, ob = 0, oc = 0, ...}). Given a multi-dimensional contract whose outcome is described by the vector X, we again consider that in order for an agent to make a rational decision, it should consider the utility that it will derive from this contract.
To this end, we can make the general statement that the expected utility of a contract is given by:

E[U] = p(X)U(X)ᵀ   (4)

where p(X) is a joint probability distribution over all possible contract outcomes:

p(X) = ( p(oa = 1, ob = 0, oc = 0, ...), p(oa = 1, ob = 1, oc = 0, ...), p(oa = 0, ob = 1, oc = 0, ...), ... )   (5)

and U(X) is the utility derived from these possible outcomes:

U(X) = ( u(oa = 1, ob = 0, oc = 0, ...), u(oa = 1, ob = 1, oc = 0, ...), u(oa = 0, ob = 1, oc = 0, ...), ... )   (6)

As before, whilst U(X) is known to the agent, the probability distribution p(X) is not. Rather, given the agent's direct experience of the supplier, the agent can determine a distribution over possible values for p(X). In the single dimensional case, a beta distribution was the natural choice over possible values of p(o = 1). In the multi-dimensional case, where p(X) itself is a vector of probabilities, the corresponding natural choice is the Dirichlet distribution, since this is a conjugate prior for multinomial proportions [3]. Given this distribution, the agent is then able to calculate an estimate of the expected utility of a contract. As before, this estimate is itself represented by an expected value given by:

E[E[U]] = p̂(X)U(X)ᵀ   (7)

and a variance, describing the uncertainty in this expected utility:

Var(E[U]) = U(X)Cov(p(X))U(X)ᵀ   (8)

where:

Cov(p(X)) ≜ E[(p(X) − p̂(X))(p(X) − p̂(X))ᵀ]   (9)

Thus, whilst the single dimensional case naturally leads to a trust model in which the agent attempts to derive an estimate of the probability that a contract will be successfully fulfilled, p̂(o = 1), along with a scalar variance that describes the uncertainty in this probability, Var(p(o = 1)), in this case the agent must derive an estimate of a vector of probabilities, p̂(X), along with a covariance matrix, Cov(p(X)), that represents the uncertainty in p(X) given the observed contractual outcomes.
At this point, it is interesting to note that the estimate in the single dimensional case, p̂(o = 1), has a clear semantic meaning in relation to trust; it is the agent's belief in the probability of a supplier successfully fulfilling a contract. However, in the multi-dimensional case the agent must determine p̂(X), and since this describes the probability of all possible contract outcomes, including those that are completely un-fulfilled, this direct semantic interpretation is not present. In the next section, we describe the exemplar utility function that we shall use in the remainder of this paper. 4.1 Exemplar Utility Function The approach described so far is completely general, in that it applies to any utility function of the form described above, and also applies to the estimation of any joint probability distribution. In the remainder of this paper, for illustrative purposes, we shall limit the discussion to the simplest possible utility function that exhibits a dependence upon the correlations between the contract dimensions. That is, we consider the case that expected utility is dependent only on the marginal probabilities of each contract dimension being successfully fulfilled, rather than the full joint probabilities:

U(X) = ( u(oa = 1), u(ob = 1), u(oc = 1), ... )   (10)

Thus, p̂(X) is a vector estimate of the probability of each contract dimension being successfully fulfilled, and maintains the clear semantic interpretation seen in the single dimensional case:

p̂(X) = ( p̂(oa = 1), p̂(ob = 1), p̂(oc = 1), ... )   (11)

The correlations between the contract dimensions affect the uncertainty in the expected utility. To see this, note that the covariance matrix that describes this uncertainty, Cov(p(X)), is now given by:

            ( Va  Cab Cac ... )
Cov(p(X)) = ( Cab Vb  Cbc ... )   (12)
            ( Cac Cbc Vc  ... )
            ( ... ... ... ... )

In this matrix, the diagonal terms, Va, Vb and Vc, represent the uncertainties in p(oa = 1), p(ob = 1) and p(oc = 1) within p(X). The off-diagonal terms, Cab, Cac and Cbc, represent the correlations between these probabilities. In the next section, we use the Dirichlet distribution to calculate both p̂(X) and Cov(p(X)) from an agent's direct experience of previous contract outcomes. We first illustrate why this is necessary by considering an alternative approach to modelling multi-dimensional contracts whereby an agent naïvely assumes that the dimensions are independent, and thus models each individually by separate beta distributions (as in the single dimensional case we presented in section 3). This is actually equivalent to setting the off-diagonal terms within the covariance matrix, Cov(p(X)), to zero. However, doing so can lead an agent to assume that its estimate of the expected utility of the contract is more accurate than it actually is. To illustrate this, consider a specific scenario with the following values: u(oa = 1) = u(ob = 1) = 1 and Va = Vb = 0.2, and let ρ = Cab/√(VaVb) denote the correlation coefficient between the two dimensions. In this case, Var(E[U]) = 0.4(1 + ρ), and thus, if the correlation is ignored then the variance in the expected utility is 0.4. However, if the contract outcomes are completely correlated then ρ = 1 and Var(E[U]) is actually 0.8. Thus, in order to have an accurate estimate of the variance of the expected contract utility, and to make a rational decision, it is essential that the agent is able to represent and calculate these correlation terms. In the next section, we describe how an agent may do so using the Dirichlet distribution.
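The effect described above can be checked numerically via equation (8); the following is a small illustrative Python sketch (the helper names are ours) for the scenario u(oa = 1) = u(ob = 1) = 1 and Va = Vb = 0.2:

```python
import math

def var_expected_utility(u, cov):
    """Var(E[U]) = U(X) Cov(p(X)) U(X)^T, as in equation (8), in two dimensions."""
    return sum(u[i] * cov[i][j] * u[j] for i in range(2) for j in range(2))

Va, Vb = 0.2, 0.2
u = (1.0, 1.0)

def cov_matrix(rho):
    # Build the 2x2 covariance matrix from a correlation coefficient rho.
    c = rho * math.sqrt(Va * Vb)
    return [[Va, c], [c, Vb]]

v_independent = var_expected_utility(u, cov_matrix(0.0))  # off-diagonals zeroed
v_correlated = var_expected_utility(u, cov_matrix(1.0))   # fully correlated
```

This reproduces the figures in the text: 0.4 when the correlation is ignored, and 0.8 when the outcomes are fully correlated.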
4.2 The Dirichlet Distribution In this section, we describe how the agent may use its direct experience of previous contracts, and the standard results of the Dirichlet distribution, to determine an estimate of the probability that each contract dimension will be successfully fulfilled, p̂(X), and a measure of the uncertainties in these probabilities that expresses the correlations between the contract dimensions, Cov(p(X)). We first consider the calculation of p̂(X) and the diagonal terms of the covariance matrix Cov(p(X)). As described above, the derivation of these results is identical to the case of the single dimensional beta distribution, where out of N contract outcomes, n are successfully fulfilled. In the multi-dimensional case, however, we have a vector {na, nb, nc, ...} that represents the number of outcomes for which each of the individual contract dimensions were successfully fulfilled. Thus, in terms of the standard Dirichlet parameters, where αa = na + 1 and α0 = N + 2, the agent can estimate the probability of this contract dimension being successfully fulfilled:

p̂(oa = 1) = αa/α0 = (na + 1)/(N + 2)

and can also calculate the variance in any contract dimension:

Va = αa(α0 − αa) / (α0²(1 + α0)) = (na + 1)(N − na + 1) / ((N + 2)²(N + 3))

However, calculating the off-diagonal terms within Cov(p(X)) is more complex since it is necessary to consider the correlations between the contract dimensions. Thus, for each pair of dimensions (i.e. a and b), we must consider all possible combinations of contract outcomes, and thus we define n^ab_ij as the number of contract outcomes for which both oa = i and ob = j. For example, n^ab_10 represents the number of contracts for which oa = 1 and ob = 0.
Now, using the standard Dirichlet notation, we can define α^ab_ij ≜ n^ab_ij + 1 for all i and j taking values 0 and 1, and then, to calculate the cross-correlations between contract pairs a and b, we note that the Dirichlet distribution over pair-wise joint probabilities is:

Prob(pab) = Kab ∏_{i∈{0,1}} ∏_{j∈{0,1}} p(oa = i, ob = j)^(α^ab_ij − 1)

where:

∑_{i∈{0,1}} ∑_{j∈{0,1}} p(oa = i, ob = j) = 1

and Kab is a normalising constant [3]. From this we can derive pair-wise probability estimates and variances:

E[p(oa = i, ob = j)] = α^ab_ij / α0   (13)

V[p(oa = i, ob = j)] = α^ab_ij(α0 − α^ab_ij) / (α0²(1 + α0))   (14)

where:

α0 = ∑_{i∈{0,1}} ∑_{j∈{0,1}} α^ab_ij   (15)

and in fact, α0 = N + 2, where N is the total number of contracts observed. Likewise, we can express the covariance in these pair-wise probabilities in similar terms:

C[p(oa = i, ob = j), p(oa = m, ob = n)] = −α^ab_ij α^ab_mn / (α0²(1 + α0))

Finally, we can use the expression:

p(oa = 1) = ∑_{j∈{0,1}} p(oa = 1, ob = j)

to determine the covariance Cab. To do so, we first simplify the notation by defining V^ab_ij ≜ V[p(oa = i, ob = j)] and C^ab_ijmn ≜ C[p(oa = i, ob = j), p(oa = m, ob = n)]. The covariance for the probability of positive contract outcomes is then the covariance between ∑_{j∈{0,1}} p(oa = 1, ob = j) and ∑_{i∈{0,1}} p(oa = i, ob = 1), and thus:

Cab = C^ab_1001 + C^ab_1101 + C^ab_1011 + V^ab_11

Thus, given a set of contract outcomes that represent the agent's previous interactions with a supplier, we may use the Dirichlet distribution to calculate the mean and variance of the probability of any contract dimension being successfully fulfilled (i.e. p̂(oa = 1) and Va). In addition, by a somewhat more complex procedure we can also calculate the correlations between these probabilities (i.e. Cab). This allows us to calculate an estimate of the probability that any contract dimension will be successfully fulfilled, p̂(X), and also represent the uncertainty and correlations in these probabilities by the covariance matrix, Cov(p(X)).
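For a single pair of dimensions, the calculations above can be sketched in a few lines of Python; the function name and the dictionary representation of the counts n^ab_ij are ours, and α0 is taken as the sum of the Dirichlet parameters, as in equation (15):

```python
def dirichlet_pair_stats(n):
    """Marginal estimates and the covariance C_ab for one pair of contract
    dimensions, from pairwise outcome counts n[(i, j)] = n^ab_ij."""
    alpha = {ij: n[ij] + 1 for ij in n}        # alpha^ab_ij = n^ab_ij + 1
    a0 = sum(alpha.values())                   # equation (15)
    denom = a0 ** 2 * (1 + a0)
    V = {ij: alpha[ij] * (a0 - alpha[ij]) / denom for ij in alpha}  # eq. (14)
    C = lambda ij, mn: -alpha[ij] * alpha[mn] / denom               # covariance
    # C_ab = C^ab_1001 + C^ab_1101 + C^ab_1011 + V^ab_11
    c_ab = C((1, 0), (0, 1)) + C((1, 1), (0, 1)) + C((1, 0), (1, 1)) + V[(1, 1)]
    p_a = (alpha[(1, 0)] + alpha[(1, 1)]) / a0  # marginal estimate p(o_a = 1)
    p_b = (alpha[(0, 1)] + alpha[(1, 1)]) / a0  # marginal estimate p(o_b = 1)
    return p_a, p_b, c_ab
```

As a sanity check, counts in which both dimensions always succeed or fail together yield a positive Cab, while counts in which one dimension succeeds exactly when the other fails yield a negative Cab.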
In turn, these results may be used to calculate the estimate and uncertainty in the expected utility of the contract. In the next section we present empirical results that show that in practice this formalism yields significant improvements in these estimates compared to the naïve approximation using multiple independent beta distributions. 4.3 Empirical Comparison In order to evaluate the effectiveness of our formalism, and show the importance of the off-diagonal terms in Cov(p(X)), we compare two approaches. Figure 1: Plots showing (i) the variance of the expected contract utility and (ii) the information content of the estimates computed using the Dirichlet distribution and multiple independent beta distributions. Results are averaged over 10⁶ runs, and the error bars show the standard error in the mean.

• Dirichlet Distribution: We use the full Dirichlet distribution, as described above, to calculate p̂(X) and Cov(p(X)) including all its off-diagonal terms that represent the correlations between the contract dimensions.

• Independent Beta Distributions: We use independent beta distributions to represent each contract dimension, in order to calculate p̂(X), and then, as described earlier, we approximate Cov(p(X)) and ignore the correlations by setting all the off-diagonal terms to zero.

We consider a two-dimensional case where u(oa = 1) = 6 and u(ob = 1) = 2, since this allows us to plot p̂(X) and Cov(p(X)) as ellipses in a two-dimensional plane, and thus explain the differences between the two approaches. Specifically, we initially allocate the agent some previous contract outcomes that represent its direct experience with a supplier.
The number of contracts is drawn uniformly between 10 and 20, and the actual contract outcomes are drawn from an arbitrary joint distribution intended to induce correlations between the contract dimensions. For each set of contracts, we use the approaches described above to calculate p̂(X) and Cov(p(X)), and hence, the variance in the expected contract utility, Var(E[U]). In addition, we calculate a scalar measure of the information content, I, of the covariance matrix Cov(p(X)), which is a standard way of measuring the uncertainty encoded within the covariance matrix [1]. More specifically, we calculate the determinant of the inverse of the covariance matrix:

I = det(Cov(p(X))⁻¹)   (16)

and note that the larger the information content, the more precise p̂(X) will be, and thus, the better the estimate of the expected utility that the agent is able to calculate. Finally, we use the results presented in section 4.2 to calculate the actual correlation, ρ, associated with this particular set of contract outcomes:

ρ = Cab / √(VaVb)   (17)

where Cab, Va and Vb are calculated as described in section 4.2. The results of this analysis are shown in figure 1. Here we show the values of I and Var(E[U]) calculated by the agents, plotted against the correlation of the contract outcomes, ρ, that constituted their direct experience. The results are averaged over 10⁶ simulation runs. Note that as expected, when the dimensions of the contract outcomes are uncorrelated (i.e. ρ = 0), then both approaches give the same results. However, the value of using our formalism with the full Dirichlet distribution is shown when the correlation between the dimensions increases (either negatively or positively). Figure 2: Examples of p̂(X) and Cov(p(X)) plotted as second standard error ellipses.
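For the two-dimensional case, both of these quantities reduce to closed forms; a minimal Python sketch (the function names and the illustrative variance values are ours):

```python
import math

def info_content_2d(Va, Vb, Cab):
    """I = det(Cov(p(X))^-1) for a 2x2 covariance matrix, as in equation (16)."""
    det = Va * Vb - Cab * Cab
    return 1.0 / det

def correlation(Va, Vb, Cab):
    """rho = C_ab / sqrt(Va Vb), as in equation (17)."""
    return Cab / math.sqrt(Va * Vb)

# Illustrative values: retaining the off-diagonal term gives a higher
# information content than zeroing it (the independent-beta approximation).
Va, Vb, Cab = 0.02, 0.02, 0.01
I_full = info_content_2d(Va, Vb, Cab)
I_indep = info_content_2d(Va, Vb, 0.0)
```

This mirrors the behaviour shown in figure 1: whenever ρ ≠ 0, zeroing the off-diagonal terms discards correlation information and lowers I.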
As can be seen, if we approximate the Dirichlet distribution with multiple independent beta distributions, all of the correlation information contained within the covariance matrix, Cov(p(X)), is lost, and thus, the information content of the matrix is much lower. The loss of this correlation information leads the variance of the expected utility of the contract to be incorrect (either over or under estimated depending on the correlation), with the exact amount of mis-estimation depending on the actual utility function chosen (i.e. the values of u(oa = 1) and u(ob = 1)); note that the plots are not smooth due to the fact that, given a limited number of contract outcomes, the means of Va and Vb do not vary smoothly with ρ. In addition, in figure 2 we illustrate an example of the estimates calculated through both methods, for a single exemplar set of contract outcomes. We represent the probability estimates, p̂(X), and the covariance matrix, Cov(p(X)), in the standard way as an ellipse [1]. That is, p̂(X) determines the position of the center of the ellipse, and Cov(p(X)) defines its size and shape. Note that whilst the ellipse resulting from the full Dirichlet formalism accurately reflects the true distribution (samples of which are plotted as points), that calculated by using multiple independent beta distributions (and thus ignoring the correlations) results in a much larger ellipse that does not reflect the true distribution. The larger size of this ellipse is a result of the off-diagonal terms of the covariance matrix being set to zero, and corresponds to the agent miscalculating the uncertainty in the probability of each contract dimension being fulfilled. This, in turn, leads it to miscalculate the uncertainty in the expected utility of a contract (shown in figure 1 as Var(E[U])). 5. COMMUNICATING REPUTATION Having described how an individual agent can use its own direct experience of contract outcomes in order to estimate the probability that a multi-dimensional contract will be successfully fulfilled, we now go on to consider how agents within an open multi-agent system can communicate these estimates to one another. This is commonly referred to as reputation and allows agents with limited direct experience of a supplier to make rational decisions. Both Jøsang and Ismail, and Teacy et al. present models whereby reputation is communicated between agents using the sufficient statistics of the beta distribution [6, 11]. This approach is attractive since these sufficient statistics are simple aggregations of contract outcomes (more precisely, they are simply the total number of contracts observed, N, and the number of these that were successfully fulfilled, n). Under the probabilistic framework of the beta distribution, reputation reports in this form may simply be aggregated with an agent's own direct experience, in order to gain a more precise estimate based on a larger set of contract outcomes. We can immediately extend this approach to the multi-dimensional case considered here, by requiring that the agents exchange the sufficient statistics of the Dirichlet distribution instead of the beta distribution. In this case, for each pair of dimensions (i.e. a and b), the agents must communicate a vector of contract outcomes, N, which are the sufficient statistics of the Dirichlet distribution, given by:

N = ⟨ n^ab_ij ⟩   ∀ a, b, i ∈ {0, 1}, j ∈ {0, 1}   (18)

Thus, an agent is able to communicate the sufficient statistics of its own Dirichlet distribution in terms of just 2d(d − 1) numbers (where d is the number of contract dimensions). For instance, in the case of three dimensions, N is given by:

N = ⟨ n^ab_00, n^ab_01, n^ab_10, n^ab_11, n^ac_00, n^ac_01, n^ac_10, n^ac_11, n^bc_00, n^bc_01, n^bc_10, n^bc_11 ⟩

and, hence, large sets of contract outcomes may be communicated within a relatively small message size, with no loss of information.
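Since aggregation of these sufficient statistics is simple addition of the corresponding counts, fusing a reputation report with an agent's own experience is a one-liner; a minimal Python sketch, where the keyed-dictionary representation of N is our own choice:

```python
def aggregate(N1, N2):
    """Fuse two Dirichlet sufficient-statistics vectors, each a dict mapping
    (a, b, i, j) -> n^ab_ij; the fused vector is the element-wise sum."""
    return {k: N1.get(k, 0) + N2.get(k, 0) for k in N1.keys() | N2.keys()}

# A report covering one pair of dimensions, fused with direct experience.
own = {('a', 'b', 1, 1): 3, ('a', 'b', 1, 0): 1}
report = {('a', 'b', 1, 1): 2, ('a', 'b', 0, 0): 4}
fused = aggregate(own, report)
```

For d dimensions the dictionary holds 2d(d − 1) entries (four counts for each of the d(d − 1)/2 unordered pairs), so the message size grows with the number of dimensions but not with the number of contracts observed.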
Again, agents receiving these sufficient statistics may simply aggregate them with their own direct experience in order to gain a more precise estimate of the trustworthiness of a supplier. Finally, we note that whilst it is not the focus of our work here, by adopting the same principled approach as Jøsang and Ismail, and Teacy et al., many of the techniques that they have developed (such as discounting reports from unreliable agents, and filtering inconsistent reports from selfish agents) may be directly applied within this multi-dimensional model. However, we now go on to consider a new issue that arises in both the single and multi-dimensional models, namely the problems that arise when such aggregated sufficient statistics are propagated within decentralised agent networks. 6. RUMOUR PROPAGATION WITHIN REPUTATION SYSTEMS In the previous section, we described the use of sufficient statistics to communicate reputation, and we showed that by aggregating contract outcomes together into these sufficient statistics, a large number of contract outcomes can be represented and communicated in a compact form. Whilst this is an attractive property, it can be problematic in practice, since the individual provenance of each contract outcome is lost in the aggregation. Thus, to ensure an accurate estimate, the reputation system must ensure that each observation of a contract outcome is included within the aggregated statistics no more than once. Within a centralised reputation system, where all agents report their direct experience to a trusted center, such double counting of contract outcomes is easy to avoid. However, in a decentralised reputation system, where agents communicate reputation to one another, and aggregate their direct experience with these reputation reports on-the-fly, avoiding double counting is much more difficult. Figure 3: Example of rumour propagation in a decentralised reputation system (agent a1 sends its outcome vector N1 to both a2 and a3, and a2 subsequently sends its aggregate N1 + N2 to a3).
For example, consider the case shown in figure 3 where three agents (a1 ... a3), each with some direct experience of a supplier, share reputation reports regarding this supplier. If agent a1 were to provide its estimate to agents a2 and a3 in the form of the sufficient statistics of its Dirichlet distribution, then these agents can aggregate these contract outcomes with their own, and thus obtain more precise estimates. If at a later stage, agent a2 were to send its aggregate vector of contract outcomes to agent a3, then agent a3, being unaware of the full history of exchanges, may attempt to combine these contract outcomes with its own aggregated vector. However, since both vectors contain a contribution from agent a1, these will be counted twice in the final aggregated vector, and will result in a biased and overconfident estimate. This is termed rumour propagation or data incest in the data fusion literature [9]. One possible solution would be to uniquely identify the source of each contract outcome, and then propagate each vector, along with its label, through the network. Agents can thus identify identical observations that have arrived through different routes, and after removing the duplicates, can aggregate these together to form their estimates. Whilst this appears to be attractive in principle, it is not always a viable solution in practice, for a number of reasons [12]. Firstly, agents may not actually wish to have their uniquely labelled contract outcomes passed around an open system, since such information may have commercial or practical significance that could be used to their disadvantage. As such, agents may only be willing to exchange identifiable contract outcomes with a small number of other agents (perhaps those that they have some sort of reciprocal relationship with).
Secondly, the fact that there is no aggregation of the contract outcomes as they pass around the network means that the message size increases over time, and the ultimate size of these messages is bounded only by the number of agents within the system (possibly an extremely large number for a global system). Finally, it may actually be difficult to assign globally agreeable, consistent, and unique labels for each agent within an open system. In the next section, we develop a novel solution to the problem of rumour propagation within decentralised reputation systems. Our solution is based on an approach developed within the area of target tracking and data fusion [9]. It avoids the need to uniquely identify an agent, it allows agents to restrict the number of other agents to whom they reveal their private estimates, and yet it still allows information to propagate throughout the network. 6.1 Private and Shared Information Our solution to rumour propagation within decentralised reputation systems introduces the notion of private information that an agent knows it has not communicated to any other agent, and shared information that has been communicated to, or received from, another agent. Thus, the agent can decompose its contract outcome vector, N, into two vectors: a private one, N_p, that has not been communicated to another agent, and a shared one, N_s, that has been shared with, or received from, another agent: N = N_p + N_s (19). Now, whenever an agent communicates reputation, it communicates both its private and shared vectors separately. Both the originating and receiving agents then update their two vectors appropriately. To understand this, consider the case that agent aα sends its private and shared contract outcome vectors, N^α_p and N^α_s, to agent aβ that itself has private and shared contract outcomes N^β_p and N^β_s.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
Each agent updates its vectors of contract outcomes according to the following procedure:
• Originating Agent: Once the originating agent has sent both its shared and private contract outcome vectors to another agent, its private information is no longer private. Thus, it must remove the contract outcomes that were in its private vector, and add them into its shared vector: N^α_s ← N^α_s + N^α_p, N^α_p ← ∅.
• Receiving Agent: The goal of the receiving agent is to accumulate the largest number of contract outcomes (since this will result in the most precise estimate) without including shared information from both itself and the other agent (since this may result in double counting of contract outcomes). It has two choices depending on the total number of contract outcomes6 within its own shared vector, N^β_s, and within that of the originating agent, N^α_s. Thus, it updates its vector according to the procedure below:
- N^β_s > N^α_s: If the receiving agent's shared vector represents a greater number of contract outcomes than the shared vector of the originating agent, then the agent combines its shared vector with the private vector of the originating agent: N^β_s ← N^β_s + N^α_p, N^β_p unchanged.
- N^β_s < N^α_s: Alternatively, if the receiving agent's shared vector represents a smaller number of contract outcomes than the shared vector of the originating agent, then the receiving agent discards its own shared vector and forms a new one from both the private and shared vectors of the originating agent: N^β_s ← N^α_s + N^α_p, N^β_p unchanged.
In the case that N^β_s = N^α_s, either option is appropriate. Once the receiving agent has updated its sets, it uses the contract outcomes within both to form its trust estimate. If agents receive several vectors simultaneously, this approach generalises to the receiving agent using the largest shared vector, and the private vectors of itself and all the originating agents, to form its new shared vector. 
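The exchange rule above can be sketched in a few lines. The dict-based agent representation and the tie-breaking choice (on equal shared counts the receiver keeps its own shared vector, which the text says is equally appropriate) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Minimal sketch of the private/shared update rule.  An agent is a dict
# with 'p' (private) and 's' (shared) contract-outcome count vectors.

def exchange(sender, receiver):
    """Sender communicates (Np, Ns); both agents are updated in place."""
    Np_a, Ns_a = sender['p'], sender['s']
    if receiver['s'].sum() >= Ns_a.sum():
        # Receiver's shared vector is at least as large: keep it and
        # absorb only the sender's private outcomes.
        receiver['s'] = receiver['s'] + Np_a
    else:
        # Otherwise discard the receiver's shared vector in favour of
        # the sender's private + shared outcomes.
        receiver['s'] = Ns_a + Np_a
    # The sender's private information is no longer private.
    sender['s'] = Ns_a + Np_a
    sender['p'] = np.zeros_like(Np_a)

a = {'p': np.array([3, 0, 1, 0]), 's': np.array([1, 1, 0, 0])}
b = {'p': np.array([0, 2, 0, 0]), 's': np.array([0, 0, 5, 0])}
exchange(a, b)
# b keeps its larger shared vector and absorbs only a's private counts,
# so a's shared contribution is never mixed into b's shared vector.
```

Note that the trust estimate of either agent is then formed from the sum of its two vectors, exactly as N = N_p + N_s in equation (19).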
This procedure has a number of attractive properties. Firstly, since contract outcomes in an agent's shared vector are never combined with those in the shared vector of another agent, outcomes that originated from the same agent are never combined together, and thus, rumour propagation is completely avoided. However, since the receiving agent may discard its own shared vector, and adopt the shared vector of the originating agent, information is still propagated around the network. Moreover, since contract outcomes are aggregated together within the private and shared vectors, the message size is constant and does not increase as the number of interactions increases. Finally, an agent only communicates its own private contract outcomes to its immediate neighbours. If this agent subsequently passes it on, it does so as unidentifiable aggregated information within its shared information. Thus, an agent may limit the number of agents with which it is willing to reveal identifiable contract outcomes, and yet these contract outcomes can still propagate within the network, and thus, improve estimates of other agents.
6 Note that this may be calculated from N = n^ab_00 + n^ab_01 + n^ab_10 + n^ab_11.
Next, we demonstrate empirically that these properties can indeed be realised in practice. 6.2 Empirical Comparison In order to evaluate the effectiveness of this procedure we simulated random networks consisting of ten agents. Each agent has some direct experience of interacting with a supplier (as described in section 4.3). At each iteration of the simulation, it interacts with its immediate neighbours and exchanges reputation reports through the sufficient statistics of their Dirichlet distributions. We compare our solution to two of the most obvious decentralised alternatives: • Private and Shared Information: The agents follow the procedure described in the previous section. 
That is, they maintain separate private and shared vectors of contract outcomes, and at each iteration they communicate both these vectors to their immediate neighbours. • Rumour Propagation: The agents do not differentiate between private and shared contract outcomes. At the first iteration they communicate all of the contract outcomes that constitute their direct experience. In subsequent iterations, they propagate contract outcomes that they receive from any of the neighbours, to all their other immediate neighbours. • Private Information Only: The agents only communicate the contract outcomes that constitute their direct experience. In all cases, at each iteration, the agents use the Dirichlet distribution in order to calculate their trust estimates. We compare these three decentralised approaches to a centralised reputation system: • Centralised Reputation: All the agents pass their direct experience to a centralised reputation system that aggregates them together, and passes this estimate back to each agent. This centralised solution makes the most effective use of information available in the network. However, most real world problems demand decentralised solutions due to scalability, modularity and communication concerns. Thus, this centralised solution is included since it represents the optimal case, and allows us to benchmark our decentralised solution. The results of these comparisons are shown in figure 4. Here we show the sum of the information content of each agent's covariance matrix (calculated as discussed earlier in section 4.3), for each of these four different approaches. We first note that where private information only is communicated, there is no change in information after the first iteration. Once each agent has received the direct experience of its immediate neighbours, no further increase in information can be achieved. This represents the minimum communication, and it exhibits the lowest total information of the four cases. 
Next, we note that in the case of rumour propagation, the information content increases continually, and rapidly exceeds the centralised reputation result. The fact that the rumour propagation case incorrectly exceeds this limit indicates that it is continuously counting the same contract outcomes as they cycle around the network, in the belief that they are independent events. Finally, we note that using private and shared information represents a compromise between the private information only case and the centralised reputation case. Information is still allowed to propagate around the network, however rumours are eliminated.
Figure 4: Sum of information over all agents as a function of the communication iteration.
As before, we also plot a single instance of the trust estimates from one agent (i.e. p̂(X) and Cov(p(X))) as a set of ellipses on a two-dimensional plane (along with samples from the true distribution). As expected, the centralised reputation system achieves the best estimate of the true distribution, since it uses the direct experience of all agents. The private information only case shows the largest ellipse since it propagates the least information around the network. The rumour propagation case shows the smallest ellipse, but it is inconsistent with the actual distribution p(X). Thus, propagating rumours around the network and double counting contract outcomes in the belief that they are independent events results in an overconfident estimate. However, we note that our solution, using separate vectors of private and shared information, allows us to propagate more information than the private information only case, but we completely avoid the problems of rumour propagation. 
Finally, we consider the effect that this has on the agents' calculation of the expected utility of the contract. We assume the same utility function as used in section 4.3 (i.e. u(oa = 1) = 6 and u(ob = 1) = 2), and in table 1 we present the estimate of the expected utility, and its standard deviation, calculated for all four cases by a single agent at iteration five (after communication has ceased to have any further effect for all methods other than rumour propagation). We note that the rumour propagation case is clearly inconsistent with the centralised reputation system, since its standard deviation is too small and does not reflect the true uncertainty in the expected utility, given the contract outcomes. However, we observe that our solution represents the closest case to the centralised reputation system, and thus succeeds in propagating information throughout the network, whilst also avoiding bias and overconfidence. The exact difference between it and the centralised reputation system depends upon the topology of the network, and the history of exchanges that take place within it. 7. CONCLUSIONS In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems. Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous. That is, not all agents observe the same contract dimensions. 
This is a challenging extension, since in this case the sufficient statistics of the Dirichlet distribution cannot be used directly. However, by addressing this challenge, we hope to be able to apply these techniques to a setting in which a supplier provides a range of services whose failures are correlated, and agents only have direct experiences of different subsets of these services.
Figure 5: Instances of p̂(X) and Cov(p(X)) plotted as second standard error ellipses after 5 communication iterations.
Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations.
Method | E[E[U]] ± √Var(E[U])
Private and Shared Information | 3.18 ± 0.54
Rumour Propagation | 3.33 ± 0.07
Private Information Only | 3.20 ± 0.65
Centralised Reputation | 3.17 ± 0.42
8. ACKNOWLEDGEMENTS This research was undertaken as part of the ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks) project and is jointly funded by a BAE Systems and EPSRC strategic partnership (EP/C548051/1).
9. REFERENCES
[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. Wiley Interscience, 2001.
[2] C. Boutilier. The foundations of expected expected utility. In Proc. of the 18th Int. Joint Conf. on Artificial Intelligence, pages 285-290, Acapulco, Mexico, 2003.
[3] M. Evans, N. Hastings, and B. Peacock. Statistical Distributions. John Wiley & Sons, Inc., 1993.
[4] N. Griffiths. Task delegation using experience-based multi-dimensional trust. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, pages 489-496, New York, USA, 2005.
[5] N. Gukrai, D. DeAngelis, K. K. Fullam, and K. S. Barber. Modelling multi-dimensional trust. In Proc. of the 9th Int. Workshop on Trust in Agent Systems, Hakodate, Japan, 2006.
[6] A. Jøsang and R. Ismail. The beta reputation system. In Proc. of the 15th Bled Electronic Commerce Conf., pages 324-337, Bled, Slovenia, 2002.
[7] E. M. Maximilien and M. P. Singh. Agent-based trust model involving multiple qualities. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, pages 519-526, Utrecht, The Netherlands, 2005.
[8] S. D. Ramchurn, D. Hunyh, and N. R. Jennings. Trust in multi-agent systems. Knowledge Engineering Review, 19(1):1-25, 2004.
[9] S. Reece and S. Roberts. Robust, low-bandwidth, multi-vehicle mapping. In Proc. of the 8th Int. Conf. on Information Fusion, Philadelphia, USA, 2005.
[10] J. Sabater and C. Sierra. REGRET: A reputation model for gregarious societies. In Proc. of the 4th Workshop on Deception, Fraud and Trust in Agent Societies, pages 61-69, Montreal, Canada, 2001.
[11] W. T. L. Teacy, J. Patel, N. R. Jennings, and M. Luck. TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12(2):183-198, 2006.
[12] S. Utete. Network Management in Decentralised Sensing Systems. PhD thesis, University of Oxford, UK, 1994.
I-43
Dynamics Based Control with an Application to Area-Sweeping Problems
In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation.
[ "dynam base control", "dynam base control", "control", "area-sweep problem", "stochast environ", "partial observ markov decis problem", "system dynam", "extend markov track", "reward function", "multi-agent system", "target dynam", "action-select random", "tag game", "environ design level", "user level", "agent level", "robot" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "R", "U", "U", "M", "U", "M", "U" ]
Dynamics Based Control with an Application to Area-Sweeping Problems Zinovi Rabinovich Engineering and Computer Science Hebrew University of Jerusalem Jerusalem, Israel nomad@cs.huji.ac.il Jeffrey S. Rosenschein Engineering and Computer Science Hebrew University of Jerusalem Jerusalem, Israel jeff@cs.huji.ac.il Gal A. Kaminka The MAVERICK Group Department of Computer Science Bar Ilan University, Israel galk@cs.biu.ac.il ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation. Categories and Subject Descriptors I.2.8 [Problem Solving, Control Methods, and Search]: Control Theory; I.2.9 [Robotics]; I.2.11 [Distributed Artificial Intelligence]: Intelligent Agents General Terms Algorithms, Theory 1. INTRODUCTION Planning and control constitutes a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility. 
While theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7]. We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics. The idea here is to view plan execution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria. We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. To evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to tag a moving target (quarry) whose position is not known with certainty. Experimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position). The paper is organized as follows. 
In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work.
978-81-904262-7-5 (RPS) © 2007 IFAAMAS
2. MOTIVATION AND RELATED WORK Many real-life scenarios naturally have a stochastic target dynamics specification, especially those domains where there exists no ultimate goal, but rather system behavior (with specific properties) that has to be continually supported. For example, security guards perform persistent sweeps of an area to detect any sign of intrusion. Cunning thieves will attempt to track these sweeps, and time their operation to key points of the guards' motion. It is thus advisable to make the guards' motion dynamics appear irregular and random. Recent work by Paruchuri et al. [10] has addressed such randomization in the context of single-agent and distributed POMDPs. The goal in that work was to generate policies that provide a measure of action-selection randomization, while maintaining rewards within some acceptable levels. Our focus differs from this work in that DBC does not optimize expected rewards (indeed, we do not consider rewards at all), but instead maintains desired dynamics (including, but not limited to, randomization). The Game of Tag is another example of the applicability of the approach. It was introduced in the work by Pineau et al. [11]. There are two agents that can move about an area, which is divided into a grid. 
The grid may have blocked cells (holes) into which no agent can move. One agent (the hunter) seeks to move into a cell occupied by the other (the quarry), such that they are co-located (this is a successful tag). The quarry seeks to avoid the hunter agent, and is always aware of the hunter's position, but does not know how the hunter will behave, which opens up the possibility for a hunter to surprise the prey. The hunter knows the quarry's probabilistic law of motion, but does not know its current location. Tag is an instance of a family of area-sweeping (pursuit-evasion) problems. In [11], the hunter modeled the problem using a POMDP. A reward function was defined, to reflect the desire to tag the quarry, and an action policy was computed to optimize the reward collected over time. Due to the intractable complexity of determining the optimal policy, the action policy computed in that paper was essentially an approximation. In this paper, instead of formulating a reward function, we use EMT to solve the problem, by directly specifying the target dynamics. In fact, any search problem with randomized motion, the so-called class of area sweeping problems, can be described through specification of such target system dynamics. Dynamics Based Control provides a natural approach to solving these problems. 3. DYNAMICS BASED CONTROL The specification of Dynamics Based Control (DBC) can be broken into three interacting levels: Environment Design Level, User Level, and Agent Level. • Environment Design Level is concerned with the formal specification and modeling of the environment. For example, this level would specify the laws of physics within the system, and set its parameters, such as the gravitation constant. • User Level in turn relies on the environment model produced by Environment Design to specify the target system dynamics it wishes to observe. The User Level also specifies the estimation or learning procedure for system dynamics, and the measure of deviation. 
In the museum guard scenario above, these would correspond to a stochastic sweep schedule, and a measure of relative surprise between the specified and actual sweeping. • Agent Level in turn combines the environment model from the Environment Design level, the dynamics estimation procedure, the deviation measure and the target dynamics specification from the User Level, to produce a sequence of actions that create system dynamics as close as possible to the targeted specification. As we are interested in the continual development of a stochastic system, such as happens in classical control theory [16] and continual planning [4], as well as in our example of museum sweeps, the question becomes how the Agent Level is to treat the deviation measurements over time. To this end, we use a probability threshold; that is, we would like the Agent Level to maximize the probability that the deviation measure will remain below a certain threshold. Specific action selection then depends on system formalization. One possibility would be to create a mixture of available system trends, much like that which happens in Behavior-Based Robotic architectures [1]. The other alternative would be to rely on the estimation procedure provided by the User Level: to utilize the Environment Design Level model of the environment to choose actions, so as to manipulate the dynamics estimator into believing that a certain dynamics has been achieved. Notice that this manipulation is not direct, but via the environment. Thus, for strong enough estimator algorithms, successful manipulation would mean a successful simulation of the specified target dynamics (i.e., beyond discerning via the available sensory input). DBC levels can also have a back-flow of information (see Figure 1). For instance, the Agent Level could provide data about target dynamics feasibility, allowing the User Level to modify the requirement, perhaps focusing on attainable features of system behavior. 
Data would also be available about the system response to different actions performed; combined with a dynamics estimator defined by the User Level, this can provide an important tool for the environment model calibration at the Environment Design Level.
Figure 1: Data flow of the DBC framework (the Environment Design, User and Agent levels exchange the environment model, ideal dynamics, estimators, dynamics feasibility and system response data).
Extending upon the idea of Actor-Critic algorithms [5], DBC data flow can provide a good basis for the design of a learning algorithm. For example, the User Level can operate as an exploratory device for a learning algorithm, inferring an ideal dynamics target from the environment model at hand that would expose and verify most critical features of system behavior. In this case, feasibility and system response data from the Agent Level would provide key information for an environment model update. In fact, the combination of feasibility and response data can provide a basis for the application of strong learning algorithms such as EM [2, 9]. 3.1 DBC for Markovian Environments For a Partially Observable Markovian Environment, DBC can be specified in a more rigorous manner. Notice how DBC discards rewards, and replaces them with another optimality criterion (structural differences are summarized in Table 1): • Environment Design level is to specify a tuple < S, A, T, O, Ω, s0 >, where: - S is the set of all possible environment states; - s0 is the initial state of the environment (which can also be viewed as a probability distribution over S); - A is the set of all possible actions applicable in the environment; - T is the environment's probabilistic transition function: T : S × A → Π(S). That is, T(s′|a, s) is the probability that the environment will move from state s to state s′ under action a; - O is the set of all possible observations. 
This is what the sensor input would look like for an outside observer; - Ω is the observation probability function: Ω : S × A × S → Π(O). That is, Ω(o|s′, a, s) is the probability that one will observe o given that the environment has moved from state s to state s′ under action a. • User Level, in the case of a Markovian environment, operates on the set of system dynamics described by a family of conditional probabilities F = {τ : S × A → Π(S)}. Thus specification of target dynamics can be expressed by q ∈ F, and the learning or tracking algorithm can be represented as a function L : O × (A × O)* → F; that is, it maps sequences of observations and actions performed so far into an estimate τ ∈ F of system dynamics. There are many possible variations available at the User Level to define divergence between system dynamics; several of them are: - Trace distance, or L1 distance, between two distributions p and q: D(p(·), q(·)) = (1/2) Σ_x |p(x) − q(x)|; - Fidelity measure of distance: F(p(·), q(·)) = Σ_x √(p(x) q(x)); - Kullback-Leibler divergence: D_KL(p(·) ‖ q(·)) = Σ_x p(x) log (p(x)/q(x)). Notice that the latter two are not actually metrics over the space of possible distributions, but nevertheless have meaningful and important interpretations. For instance, Kullback-Leibler divergence is an important tool of information theory [3] that allows one to measure the price of encoding an information source governed by q, while assuming that it is governed by p. The User Level also defines the threshold of dynamics deviation probability θ. • Agent Level is then faced with the problem of selecting a control signal function a* to satisfy the minimization problem: a* = arg min_a Pr(d(τ_a, q) > θ), where d(τ_a, q) is a random variable describing the deviation of the dynamics estimate τ_a, created by L under control signal a, from the ideal dynamics q. 
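The three deviation measures listed above can be sketched directly for discrete distributions; function names here are illustrative:

```python
import numpy as np

# Sketch of the three User Level deviation measures, for discrete
# distributions p and q over the same finite support.

def trace_distance(p, q):
    """L1 (total variation) distance: (1/2) * sum |p(x) - q(x)|."""
    return 0.5 * float(np.abs(p - q).sum())

def fidelity(p, q):
    """Fidelity: sum sqrt(p(x) * q(x)); equals 1 exactly when p == q."""
    return float(np.sqrt(p * q).sum())

def kl_divergence(p, q):
    """D_KL(p || q); assumes q(x) > 0 wherever p(x) > 0."""
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))
```

As the text notes, only the trace distance is a metric: fidelity is a similarity measure (larger means closer), and KL divergence is asymmetric in its arguments.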
Implicit in this minimization problem is that L is manipulated via the environment, based on the environment model produced by the Environment Design Level. 3.2 DBC View of the State Space It is important to note the complementary view that DBC and POMDPs take on the state space of the environment. POMDPs regard state as a stationary snap-shot of the environment; whatever attributes of state sequencing one seeks are reached through properties of the control process, in this case reward accumulation. This can be viewed as if the sequencing of states and the attributes of that sequencing are only introduced by and for the controlling mechanism, the POMDP policy. DBC concentrates on the underlying principle of state sequencing, the system dynamics. DBC's target dynamics specification can use the environment's state space as a means to describe, discern, and preserve changes that occur within the system. As a result, DBC has a greater ability to express state sequencing properties, which are grounded in the environment model and its state space definition. For example, consider the task of moving through rough terrain towards a goal and reaching it as fast as possible. POMDPs would encode terrain as state space points, while speed would be ensured by negative reward for every step taken without reaching the goal: accumulating higher reward can be achieved only by faster motion. Alternatively, the state space could directly include the notion of speed. For POMDPs, this would mean that the same concept is encoded twice, in some sense: directly in the state space, and indirectly within reward accumulation. Now, even if the reward function were to encode more, and finer, details of the properties of motion, the POMDP solution will have to search in a much larger space of policies, while still being guided by the implicit concept of the reward accumulation procedure. 
On the other hand, the tactical target expression of variations and correlations between position and speed of motion is now grounded in the state space representation. In this situation, any further constraints, e.g., smoothness of motion, speed limits in different locations, or speed reductions during sharp turns, are explicitly and uniformly expressed by the tactical target, and can result in faster and more effective action selection by a DBC algorithm. 4. EMT-BASED CONTROL AS A DBC Recently, a control algorithm was introduced called EMT-based Control [13], which instantiates the DBC framework. Although it provides an approximate greedy solution in the DBC sense, initial experiments using EMT-based control have been encouraging [14, 15]. EMT-based control is based on the Markovian environment definition, as in the case with POMDPs, but its User and Agent Levels are of the Markovian DBC type of optimality. • User Level of EMT-based control defines a limited-case target system dynamics independent of action: qEMT : S → Π(S). It then utilizes the Kullback-Leibler divergence measure to compose a momentary system dynamics estimator-the Extended Markov Tracking (EMT) algorithm. The algorithm keeps a system dynamics estimate τt EMT that is capable of explaining recent change in an auxiliary Bayesian system state estimator from pt−1 to pt, and updates it conservatively using Kullback-Leibler divergence. Since τt EMT and pt−1,t are respectively the conditional and marginal probabilities over the systems state space, explanation simply means that pt(s ) = s τt EMT (s |s)pt−1(s), and the dynamics estimate update is performed by solving a 792 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Table 1: Structure of POMDP vs. 
Dynamics-Based Control in a Markovian Environment

- Environment Design level (shared by both approaches): <S, A, T, O, Ω>, where S is the set of states; A is the set of actions; T : S × A → Π(S) is the transition function; O is the observation set; and Ω : S × A × S → Π(O) is the observation function.
- User level. POMDP: r : S × A × S → R (reward function) and F(π*) → r (reward remodeling). Markovian DBC: q : S × A → Π(S) (ideal dynamics), L(o1, ..., ot) → τ (dynamics estimator), and θ (deviation threshold).
- Agent level. POMDP: π* = arg max_π E[Σ_t γ^t r_t]. Markovian DBC: π* = arg min_π Prob(d(τ ‖ q) > θ).

The EMT dynamics estimate update is the solution of the minimization problem:

τ_t^EMT = H[p_t, p_{t−1}, τ_{t−1}^EMT] = arg min_τ D_KL(τ × p_{t−1} ‖ τ_{t−1}^EMT × p_{t−1})
s.t. p_t(s') = Σ_s (τ × p_{t−1})(s', s) and p_{t−1}(s) = Σ_{s'} (τ × p_{t−1})(s', s)

• The Agent Level in EMT-based control is suboptimal with respect to DBC (though it remains within the DBC framework), performing greedy action selection based on prediction of EMT's reaction. The prediction relies on the environment model provided by the Environment Design Level: if we denote by T_a the environment's transition function limited to action a, and by p_{t−1} the auxiliary Bayesian system state estimator, then the EMT-based control choice is described by

a* = arg min_{a∈A} D_KL(H[T_a × p_t, p_t, τ_t^EMT] ‖ q_EMT × p_{t−1})

Note that this follows the Markovian DBC framework precisely: the rewarding optimality of POMDPs is discarded, and in its place a dynamics estimator (EMT in this case) is manipulated via action effects on the environment to produce an estimate close to the specified target system dynamics. Yet, as we mentioned, naive EMT-based control is suboptimal in the DBC sense, and has several additional limitations that do not exist in the general DBC framework (discussed in Section 4.2).

4.1 Multi-Target EMT

At times, there may exist several behavioral preferences. For example, in the case of museum guards, some art items are more heavily guarded, requiring that the guards stay more often in their close vicinity. On the other hand, no corner of the museum is to be left unchecked, which demands constant motion.
Successful museum security demands that the guards adhere to, and balance, both of these behaviors. For EMT-based control, this means facing several tactical targets {q_k}, k = 1, ..., K, and the question becomes how to merge and balance them. A balancing mechanism can be applied to resolve this issue. Note that EMT-based control, while selecting an action, creates a preference vector over the set of actions based on their predicted performance with respect to a given target. If these preference vectors are normalized, they can be combined into a single unified preference. This requires replacing standard EMT-based action selection with the algorithm below [15]:

• Given:
  - a set of target dynamics {q_k}, k = 1, ..., K,
  - a vector of weights w(k).
• Select an action as follows:
  - For each action a ∈ A, predict the future state distribution p̄_{t+1}^a = T_a ∗ p_t;
  - For each action, compute D_a = H(p̄_{t+1}^a, p_t, PD_t), where PD_t denotes the current system dynamics estimate;
  - For each a ∈ A and tactical target q_k, denote V(a, k) = D_KL(D_a ‖ q_k)_{p_t}, and let V_k(a) = (1/Z_k) V(a, k), where Z_k = Σ_{a∈A} V(a, k) is a normalization factor;
  - Select a* = arg min_a Σ_{k=1}^K w(k) V_k(a).

The weight vector w = (w_1, ..., w_K) allows additional tuning of the importance among target dynamics without the need to redesign the targets themselves. This balancing method is also seamlessly integrated into the EMT-based control flow of operation.

4.2 EMT-based Control Limitations

EMT-based control is a suboptimal (in the DBC sense) representative of the DBC structure. It limits the User Level by forcing EMT to be its dynamics tracking algorithm, and replaces Agent Level optimization with greedy action selection. This kind of combination, however, is common for on-line algorithms. Although further development of EMT-based controllers is necessary, evidence so far suggests that even the simplest form of the algorithm possesses a great deal of power, and displays trends that are optimal in the DBC sense of the word.
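A minimal sketch of this balancing scheme for a toy two-state system follows. The brute-force line search stands in for the constrained minimization that defines H, and all function names, example transition matrices, and numbers are our own illustration rather than the paper's implementation:

```python
import math

EPS = 1e-12  # guards log() against zero entries at the grid's extremes

def kl(p, q):
    """D_KL(p || q) for two discrete distributions given as flat lists."""
    return sum(pi * math.log((pi + EPS) / (qi + EPS)) for pi, qi in zip(p, q))

def joint(tau, p):
    """Flattened joint distribution (tau x p)(s', s) = tau[s'][s] * p[s]."""
    n = len(p)
    return [tau[s2][s] * p[s] for s2 in range(n) for s in range(n)]

def emt_update(p_new, p_old, tau_prev, grid=1000):
    """Conservative EMT update H[p_new, p_old, tau_prev] for a 2-state system:
    minimize D_KL(tau x p_old || tau_prev x p_old) over column-stochastic tau
    subject to tau explaining the step p_old -> p_new.  A brute-force line
    search stands in for a proper constrained optimizer; assumes p_old[1] > 0."""
    best, best_tau = math.inf, None
    for i in range(grid + 1):
        a = i / grid                              # a = tau(s'=0 | s=0)
        b = (p_new[0] - a * p_old[0]) / p_old[1]  # fixed by the constraint
        if not 0.0 <= b <= 1.0:
            continue
        tau = [[a, b], [1.0 - a, 1.0 - b]]        # tau[s'][s]; columns sum to 1
        d = kl(joint(tau, p_old), joint(tau_prev, p_old))
        if d < best:
            best, best_tau = d, tau
    return best_tau

def expected_kl(tau1, tau2, p):
    """D_KL between conditionals, weighted by p: the V(a, k) term."""
    n = len(p)
    return sum(p[s] * sum(tau1[s2][s] * math.log((tau1[s2][s] + EPS) /
                                                 (tau2[s2][s] + EPS))
                          for s2 in range(n)) for s in range(n))

def select_action(actions, T, p, tau_prev, targets, w):
    """Weighted multi-target selection: normalize each target's preference
    vector, then pick the action minimizing the weighted sum."""
    D = {}
    for a in actions:  # predicted next belief and EMT's reaction per action
        p_next = [sum(T[a][s2][s] * p[s] for s in range(len(p)))
                  for s2 in range(len(p))]
        D[a] = emt_update(p_next, p, tau_prev)
    score = {a: 0.0 for a in actions}
    for k, q in enumerate(targets):
        raw = {a: expected_kl(D[a], q, p) for a in actions}
        z = sum(raw.values()) or 1.0              # normalization factor Z_k
        for a in actions:
            score[a] += w[k] * raw[a] / z
    return min(actions, key=score.get)
```

With a target that concentrates mass on state 0, the selector prefers the "flip" action from the belief [0.2, 0.8], since that action's predicted dynamics estimate sits closer to the target.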
There are two further, EMT-specific, limitations to EMT-based control that are evident at this point. Both already have partial solutions and are the subject of ongoing research.

The first limitation is the problem of negative preference. In the POMDP framework, for example, this is captured simply, through the appearance of values with different signs within the reward structure. For EMT-based control, however, negative preference means that one would like to avoid a certain distribution over system development sequences; EMT-based control, by contrast, concentrates on getting as close as possible to a distribution. Avoidance is thus unnatural in native EMT-based control.

The second limitation comes from the fact that standard environment modeling can create pure sensory actions: actions that do not change the state of the world, and differ only in the way observations are received and in the quality of the observations received. Since the world state does not change, EMT-based control is unable to differentiate between different sensory actions.

Notice that both of these limitations of EMT-based control are absent from the general DBC framework, since it may employ a tracking algorithm capable of considering pure sensory actions and, unlike Kullback-Leibler divergence, a distribution deviation measure that is capable of dealing with negative preference.

5. EMT PLAYING TAG

The Game of Tag was first introduced in [11]. It is a single-agent problem of capturing a quarry, and belongs to the class of area-sweeping problems. An example domain is shown in Figure 2.

Figure 2: Tag domain; an agent (A) attempts to seek and capture a quarry (Q)

The Game of Tag severely limits the agent's perception, so that the agent is able to detect the quarry only if they are co-located in the same cell of the grid world.
In the classical version of the game, co-location leads to a special observation, upon which the "Tag" action can be performed. We slightly modify this setting: the moment both agents occupy the same cell, the game ends. Both the agent and its quarry have the same motion capability, which allows them to move in four directions: North, South, East, and West. These form a formal space of actions within a Markovian environment.

The state space of the formal Markovian environment is described by the cross-product of the agent's and quarry's positions. For Figure 2, it would be S = {s0, ..., s23} × {s0, ..., s23}. The effects of an action taken by the agent are deterministic, but the environment in general has a stochastic response due to the motion of the quarry: with probability q0 it stays put, and with probability 1 − q0 it moves to an adjacent cell farther away from the agent (in our experiments, q0 = 0.2). So for the instance shown in Figure 2 and q0 = 0.1:

P(Q = s9 | Q = s9, A = s11) = 0.1
P(Q = s2 | Q = s9, A = s11) = 0.3
P(Q = s8 | Q = s9, A = s11) = 0.3
P(Q = s14 | Q = s9, A = s11) = 0.3

Although the evasive behavior of the quarry is known to the agent, the quarry's position is not. The only sensory information available to the agent is its own location.

We use EMT and directly specify the target dynamics. For the Game of Tag, we can easily formulate three major trends: catching the quarry, staying mobile, and stalking the quarry. This results in the following three target dynamics:

T_catch(A_{t+1} = s_i | Q_t = s_j, A_t = s_a) ∝ 1 if s_i = s_j, 0 otherwise
T_mobile(A_{t+1} = s_i | Q_t = s_o, A_t = s_j) ∝ 0 if s_i = s_j, 1 otherwise
T_stalk(A_{t+1} = s_i | Q_t = s_o, A_t = s_j) ∝ 1 / dist(s_i, s_o)

Note that none of the above targets is directly achievable; for instance, if Q_t = s9 and A_t = s11, there is no action that can move the agent to A_{t+1} = s9 as required by the T_catch target dynamics.

We ran several experiments to evaluate EMT performance in the Tag Game.
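The three target dynamics and the quarry's evasive law of motion can be written down concretely. The sketch below uses a plain rectangular grid with Manhattan distance, whereas the domain in Figure 2 is irregular, so the layout, the adjacency rule, the clamping of dist at the quarry's own cell, and all helper names are our own simplification:

```python
def manhattan(c1, c2, width):
    """Manhattan distance between cells of a rectangular width-wide grid."""
    x1, y1 = c1 % width, c1 // width
    x2, y2 = c2 % width, c2 // width
    return abs(x1 - x2) + abs(y1 - y2)

def normalize(weights):
    """Scale non-negative weights into a probability distribution."""
    z = sum(weights.values())
    return {s: w / z for s, w in weights.items()}

def t_catch(cells, quarry):
    """T_catch: all probability mass on the quarry's cell."""
    return normalize({s: 1.0 if s == quarry else 0.0 for s in cells})

def t_mobile(cells, agent):
    """T_mobile: uniform over every cell except the agent's current one."""
    return normalize({s: 0.0 if s == agent else 1.0 for s in cells})

def t_stalk(cells, quarry, width):
    """T_stalk: mass proportional to 1/dist(s, quarry); dist is clamped to 1
    at the quarry's own cell to avoid division by zero (our convention)."""
    return normalize({s: 1.0 / max(manhattan(s, quarry, width), 1)
                      for s in cells})

def quarry_step(quarry, agent, cells, width, q0=0.2):
    """Evasive law of motion: stay put with probability q0, otherwise move
    uniformly to an adjacent cell farther away from the agent."""
    farther = [s for s in cells
               if manhattan(s, quarry, width) == 1
               and manhattan(s, agent, width) > manhattan(quarry, agent, width)]
    dist = {s: 0.0 for s in cells}
    dist[quarry] = q0 if farther else 1.0
    for s in farther:
        dist[s] += (1.0 - q0) / len(farther)
    return dist
```

On a 3x3 grid with the quarry in a corner and the agent beside it, quarry_step concentrates the 1 − q0 mass on the cells that increase the distance to the agent, mirroring the structure of the worked example above (with different numbers, since the grid differs).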
Three configurations of the domain, shown in Figure 3, were used, each posing a different challenge to the agent due to partial observability. In each setting, a set of 1000 runs was performed with a time limit of 100 steps. In every run, the initial positions of both the agent and its quarry were selected at random; this means that, as far as the agent was concerned, the quarry's initial position was uniformly distributed over the entire domain cell space.

We also used two variations of the environment observability function. In the first version, the observability function mapped all joint positions of hunter and quarry into the position of the hunter as an observation. In the second, only those joint positions in which hunter and quarry occupied different locations were mapped into the hunter's location. The second version in fact utilized and expressed the fact that once hunter and quarry occupy the same cell, the game ends. The results of these experiments are shown in Table 2.

Balancing [15] the catch, move, and stalk target dynamics described in the previous section by the weight vector [0.8, 0.1, 0.1], EMT produced stable performance in all three domains. Although direct comparisons are difficult to make, EMT displayed notable efficiency vis-a-vis the POMDP approach. In spite of a simple and inefficient Matlab implementation of the EMT algorithm, the decision time for any given step averaged significantly below 1 second in all experiments. For the irregular open arena domain, which proved to be the most difficult, 1000 experiment runs bounded by 100 steps each, a total of 42411 steps, were completed in slightly under 6 hours. That is, over 4 × 10^4 online steps took an order of magnitude less time than the offline computation of the POMDP policy in [11].
The significance of this differential is made even more prominent by the fact that, should the environment model parameters change, the online nature of EMT would allow it to maintain its performance time, while the POMDP policy would need to be recomputed, requiring yet again a large computational overhead. We also measured the empirical entropy of the agent's cell-visit frequency from the trial data.

Figure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor

Table 2: Performance of the EMT-based solution in three Tag Game domains and two observability models: I) omniposition quarry, II) quarry is not at hunter's position

Model  Domain     Capture%  E(Steps)  Time/Step
I      Dead-ends  100       14.8      72 ms
I      Arena      80.2      42.4      500 ms
I      Circle     91.4      34.6      187 ms
II     Dead-ends  100       13.2      91 ms
II     Arena      96.8      28.67     396 ms
II     Circle     94.4      31.63     204 ms

As Figure 4 and Figure 5 show, empirical entropy grows with the length of interaction. For runs where the quarry was not captured immediately, the entropy reaches between 0.85 and 0.95 for different runs and scenarios. As the agent actively seeks the quarry, the entropy never reaches its maximum. One characteristic of the entropy graph for the open arena scenario particularly caught our attention in the case of the omniposition-quarry observation model. Near the maximum limit of trial length (100 steps), entropy suddenly dropped. Further analysis of the data showed that under certain circumstances a fluctuating behavior occurs, in which the agent faces equally viable versions of quarry-following behavior. Since the EMT algorithm uses greedy action selection, and the state space does not encode any form of commitment (not even speed or acceleration), the agent becomes locked within a small portion of cells.
It is essentially attempting to simultaneously follow several courses of action, all of which are consistent with the target dynamics. This behavior did not occur in our second observation model, since that model significantly reduced the set of eligible courses of action, essentially contributing to tie-breaking among them.

6. DISCUSSION

The design of the EMT solution for the Tag Game exposes the core difference in approach to planning and control between EMT or DBC, on the one hand, and the more familiar POMDP approach, on the other. POMDP defines a reward structure to optimize, and influences system dynamics indirectly through that optimization. EMT discards any reward scheme, and instead measures and influences system dynamics directly. (Entropy in these experiments was calculated using a log base equal to the number of possible locations within the domain; this properly scales the entropy into the range [0, 1] for all domains.)

Thus for the Tag Game, we did not search for a reward function that would encode and express our preference over the agent's behavior, but rather directly set three (heuristic) behavior preferences as the basis for target dynamics to be maintained. Experimental data show that these targets need not be directly achievable via the agent's actions. However, the relation between EMT performance and the achievability of the target dynamics remains to be explored.

The Tag Game experiment data also revealed the different emphasis DBC and POMDPs place on the formulation of an environment state space. POMDPs rely entirely on the mechanism of reward accumulation maximization, i.e., on shaping the action selection procedure to achieve the necessary state sequencing. DBC, on the other hand, has two sources of sequencing specification: the properties of an action selection procedure, and direct specification within the target dynamics.
The importance of the second source was underlined by the Tag Game experiment data, in which the greedy EMT algorithm, applied to a POMDP-type state space specification, failed, since a target description over such a state space was incapable of encoding the necessary behavior tendencies, e.g., tie-breaking and commitment to directed motion.

The structural differences between DBC (and EMT in particular) and POMDPs prohibit direct performance comparison, and place them on complementary tracks, each within a suitable niche. For instance, POMDPs could be seen as a much more natural formulation of economic sequential decision-making problems, while EMT is a better fit for continual demand for stochastic change, as happens in many robotic or embodied-agent problems.

The complementary properties of POMDPs and EMT can be further exploited. There is recent interest in using POMDPs in hybrid solutions [17], in which the POMDPs can be used together with other control approaches to provide results not easily achievable with either approach by itself. DBC can be an effective partner in such a hybrid solution. For instance, POMDPs have prohibitively large off-line time requirements for policy computation, but can be readily used in simpler settings to expose beneficial behavioral trends; this can serve as a form of target dynamics that is provided to EMT in a larger domain for on-line operation.

7. CONCLUSIONS AND FUTURE WORK

In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. The optimality of DBC plans of action is measured
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.

Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.

with respect to the deviation of actual system dynamics from the target dynamics. We show that the recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure. Since EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multi-target version of EMT [15] to demonstrate that the class of area-sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain.

As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.

8.
ACKNOWLEDGMENT

The work of the first two authors was partially supported by Israel Science Foundation grant #898/05, and the third author was partially supported by a grant from Israel's Ministry of Science and Technology.

9. REFERENCES

[1] R. C. Arkin. Behavior-Based Robotics. MIT Press, 1998.
[2] J. A. Bilmes. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and Hidden Markov Models. Technical Report TR-97-021, Department of Electrical Engineering and Computer Science, University of California at Berkeley, 1998.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[4] M. E. desJardins, E. H. Durfee, C. L. Ortiz, and M. J. Wolverton. A survey of research in distributed, continual planning. AI Magazine, 4:13-22, 1999.
[5] V. R. Konda and J. N. Tsitsiklis. Actor-Critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166, 2003.
[6] W. S. Lim. A rendezvous-evasion game on discrete locations with joint randomization. Advances in Applied Probability, 29(4):1004-1017, December 1997.
[7] M. L. Littman, T. L. Dean, and L. P. Kaelbling. On the complexity of solving Markov decision problems. In Proceedings of the 11th Annual Conference on Uncertainty in Artificial Intelligence (UAI-95), pages 394-402, 1995.
[8] O. Madani, S. Hanks, and A. Condon. On the undecidability of probabilistic planning and related stochastic optimization problems. Artificial Intelligence Journal, 147(1-2):5-34, July 2003.
[9] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer Academic Publishers, 1998.
[10] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus. Security in multiagent systems by policy randomization. In Proceedings of AAMAS 2006, 2006.
[11] J. Pineau, G. Gordon, and S.
Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1025-1032, August 2003.
[12] M. L. Puterman. Markov Decision Processes. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics Section. Wiley-Interscience, New York, 1994.
[13] Z. Rabinovich and J. S. Rosenschein. Extended Markov Tracking with an application to control. In The Workshop on Agent Tracking: Modeling Other Agents from Observations, at the Third International Joint Conference on Autonomous Agents and Multiagent Systems, pages 95-100, New York, July 2004.
[14] Z. Rabinovich and J. S. Rosenschein. Multiagent coordination by Extended Markov Tracking. In The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 431-438, Utrecht, The Netherlands, July 2005.
[15] Z. Rabinovich and J. S. Rosenschein. On the response of EMT-based control to interacting targets and models. In The Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 465-470, Hakodate, Japan, May 2006.
[16] R. F. Stengel. Optimal Control and Estimation. Dover Publications, 1994.
[17] M. Tambe, E. Bowring, H. Jung, G. Kaminka, R. Maheswaran, J. Marecki, J. Modi, R. Nair, J. Pearce, P. Paruchuri, D. Pynadath, P. Scerri, N. Schurr, and P. Varakantham. Conflicts in teamwork: Hybrids to the rescue.
Dynamics Based Control with an Application to Area-Sweeping Problems

ABSTRACT

In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT), is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multi-target EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework and its EMT instantiation.

1. INTRODUCTION

Planning and control constitutes a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility. While theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7].

We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics. The idea here is to view plan execution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria.
We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. To evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to "tag" a moving target (quarry) whose position is not known with certainty. Experimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position). The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. 
Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work.

2. MOTIVATION AND RELATED WORK

Many real-life scenarios naturally have a stochastic target dynamics specification, especially those domains where there exists no ultimate goal, but rather system behavior (with specific properties) that has to be continually supported. For example, security guards perform persistent sweeps of an area to detect any sign of intrusion. Cunning thieves will attempt to track these sweeps, and time their operation to key points of the guards' motion. It is thus advisable to make the guards' motion dynamics appear irregular and random. Recent work by Paruchuri et al. [10] has addressed such randomization in the context of single-agent and distributed POMDPs. The goal in that work was to generate policies that provide a measure of action-selection randomization, while maintaining rewards within some acceptable levels. Our focus differs from this work in that DBC does not optimize expected rewards--indeed we do not consider rewards at all--but instead maintains desired dynamics (including, but not limited to, randomization).

The Game of Tag is another example of the applicability of the approach. It was introduced in the work by Pineau et al. [11]. There are two agents that can move about an area, which is divided into a grid. The grid may have blocked cells (holes) into which no agent can move. One agent (the hunter) seeks to move into a cell occupied by the other (the quarry), such that they are co-located (this is a "successful tag"). The quarry seeks to avoid the hunter agent, and is always aware of the hunter's position, but does not know how the hunter will behave, which opens up the possibility for a hunter to surprise the prey. The hunter knows the quarry's probabilistic law of motion, but does not know its current location.
Tag is an instance of a family of area-sweeping (pursuit-evasion) problems. In [11], the hunter modeled the problem using a POMDP. A reward function was defined to reflect the desire to tag the quarry, and an action policy was computed to optimize the reward collected over time. Due to the intractable complexity of determining the optimal policy, the action policy computed in that paper was essentially an approximation. In this paper, instead of formulating a reward function, we use EMT to solve the problem, by directly specifying the target dynamics. In fact, any search problem with randomized motion, the so-called class of area-sweeping problems, can be described through specification of such target system dynamics. Dynamics Based Control provides a natural approach to solving these problems.

3. DYNAMICS BASED CONTROL

The specification of Dynamics Based Control (DBC) can be broken into three interacting levels: the Environment Design Level, the User Level, and the Agent Level.

• The Environment Design Level is concerned with the formal specification and modeling of the environment. For example, this level would specify the laws of physics within the system, and set its parameters, such as the gravitation constant.

• The User Level in turn relies on the environment model produced by Environment Design to specify the target system dynamics it wishes to observe. The User Level also specifies the estimation or learning procedure for system dynamics, and the measure of deviation. In the museum guard scenario above, these would correspond to a stochastic sweep schedule, and a measure of relative surprise between the specified and actual sweeping.

• The Agent Level in turn combines the environment model from the Environment Design Level, the dynamics estimation procedure, the deviation measure, and the target dynamics specification from the User Level, to produce a sequence of actions that create system dynamics as close as possible to the targeted specification.
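The three-level decomposition can be phrased as interfaces. The skeleton below is purely illustrative (the type aliases, class names, and fields are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative type aliases for the DBC levels.
State = str
Action = str
Dist = Dict[State, float]                    # a member of Pi(S)
Dynamics = Callable[[State, Action], Dist]   # tau : S x A -> Pi(S)

@dataclass
class EnvironmentDesign:
    """Formal model of the environment, e.g., its transition law."""
    states: List[State]
    actions: List[Action]
    transition: Dynamics                     # T : S x A -> Pi(S)

@dataclass
class UserLevel:
    """Target dynamics, estimation procedure, and deviation measure."""
    target: Dynamics                         # q, the desired system dynamics
    estimator: Callable[[List[Tuple[Action, State]]], Dynamics]  # L
    deviation: Callable[[Dynamics, Dynamics], float]             # d
    threshold: float                         # theta

@dataclass
class AgentLevel:
    """Chooses actions so the estimated dynamics stay close to the target."""
    env: EnvironmentDesign
    user: UserLevel

    def score(self, estimate: Dynamics) -> float:
        # deviation of the current dynamics estimate from the target
        return self.user.deviation(estimate, self.user.target)
```

The Agent Level here only scores an estimate; any concrete controller (such as the EMT instantiation discussed later in the paper) would add an action selection rule on top of this interface.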
As we are interested in the continual development of a stochastic system, such as happens in classical control theory [16] and continual planning [4], as well as in our example of museum sweeps, the question becomes how the Agent Level is to treat the deviation measurements over time. To this end, we use a probability threshold: we would like the Agent Level to maximize the probability that the deviation measure will remain below a certain threshold. Specific action selection then depends on system formalization. One possibility would be to create a mixture of available system trends, much like that which happens in Behavior-Based Robotic architectures [1]. The other alternative would be to rely on the estimation procedure provided by the User Level, that is, to utilize the Environment Design Level model of the environment to choose actions so as to manipulate the dynamics estimator into believing that a certain dynamics has been achieved. Notice that this manipulation is not direct, but via the environment. Thus, for strong enough estimator algorithms, successful manipulation would mean a successful simulation of the specified target dynamics (i.e., beyond discerning via the available sensory input).

DBC levels can also have a back-flow of information (see Figure 1). For instance, the Agent Level could provide data about target dynamics feasibility, allowing the User Level to modify the requirement, perhaps focusing on attainable features of system behavior. Data would also be available about the system response to different actions performed; combined with a dynamics estimator defined by the User Level, this can provide an important tool for environment model calibration at the Environment Design Level.

Figure 1: Data flow of the DBC framework

Extending upon the idea of Actor-Critic algorithms [5], DBC data flow can provide a good basis for the design of a learning algorithm.
For example, the User Level can operate as an exploratory device for a learning algorithm, inferring an ideal dynamics target from the environment model at hand that would expose and verify the most critical features of system behavior. In this case, feasibility and system response data from the Agent Level would provide key information for an environment model update. In fact, the combination of feasibility and response data can provide a basis for the application of strong learning algorithms such as EM [2, 9].

3.1 DBC for Markovian Environments

For a Partially Observable Markovian Environment, DBC can be specified in a more rigorous manner. Notice how DBC discards rewards, replacing them with another optimality criterion (structural differences are summarized in Table 1):

• The Environment Design level is to specify a tuple <S, A, T, O, Ω, s0>, where:
  - S is the set of all possible environment states;
  - s0 is the initial state of the environment (which can also be viewed as a probability distribution over S);
  - A is the set of all possible actions;
  - T : S × A → Π(S) is the stochastic transition function, specifying the probability that the environment will move from state s to state s' under action a;
  - O is the set of all possible observations, i.e., what the sensor input would look like for an outside observer;
  - Ω : S × A × S → Π(O) is the observation function, specifying the probability to observe o given that the environment has moved from state s to state s' under action a.

• The User Level, in the case of a Markovian environment, operates on the set of system dynamics described by a family of conditional probabilities F = {τ : S × A → Π(S)}. The specification of target dynamics can thus be expressed by q ∈ F, and the learning or tracking algorithm can be represented as a function L : O × (A × O)* → F; that is, it maps sequences of observations and actions performed so far into an estimate τ ∈ F of system dynamics.
There are many possible variations available at the User Level to define divergence between system dynamics, among them the trace (L1) distance and the Kullback-Leibler divergence. Notice that some of these are not actually metrics over the space of possible distributions, but nevertheless have meaningful and important interpretations. For instance, the Kullback-Leibler divergence is an important tool of information theory [3] that allows one to measure the "price" of encoding an information source governed by q, while assuming that it is governed by p. The User Level also defines the threshold of dynamics deviation probability, θ.
• The Agent Level is then faced with the problem of selecting a control signal function a* to satisfy the minimization problem a* = arg min_a Pr(d(τa, q) > θ), where d(τa, q) is a random variable describing the deviation of the dynamics estimate τa, created by L under control signal a, from the ideal dynamics q. Implicit in this minimization problem is that L is manipulated via the environment, based on the environment model produced by the Environment Design Level.
3.2 DBC View of the State Space
It is important to note the complementary views that DBC and POMDPs take of the environment's state space. POMDPs regard state as a stationary snapshot of the environment; whatever attributes of state sequencing one seeks are reached through properties of the control process, in this case reward accumulation. This can be viewed as if the sequencing of states, and the attributes of that sequencing, are introduced only by and for the controlling mechanism--the POMDP policy. DBC concentrates on the underlying principle of state sequencing: the system dynamics. DBC's target dynamics specification can use the environment's state space as a means to describe, discern, and preserve changes that occur within the system. As a result, DBC has a greater ability to express state sequencing properties, which are grounded in the environment model and its state space definition.
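The Kullback-Leibler divergence discussed above, together with the trace (L1) distance the text also uses, can be computed directly. A small sketch of ours, with dictionary-based distributions:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q): the expected extra cost of encoding a source governed by p
    while assuming it is governed by q. Requires q[s] > 0 wherever p[s] > 0;
    it is not symmetric, hence not a metric."""
    return sum(ps * math.log(ps / q[s]) for s, ps in p.items() if ps > 0)

def l1_distance(p, q):
    """L1 distance between two distributions over a common support."""
    support = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

p = {"s0": 0.9, "s1": 0.1}
q = {"s0": 0.5, "s1": 0.5}
print(round(l1_distance(p, q), 6))                 # 0.8
print(kl_divergence(p, q) != kl_divergence(q, p))  # True: KL is asymmetric
```

The asymmetry check makes the "not actually a metric" remark concrete: D_KL(p || q) and D_KL(q || p) generally differ, while the L1 distance is symmetric and satisfies the triangle inequality.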
For example, consider the task of moving through rough terrain towards a goal and reaching it as fast as possible. POMDPs would encode the terrain as state space points, while speed would be ensured by a negative reward for every step taken without reaching the goal: a higher accumulated reward can be attained only by faster motion. Alternatively, the state space could directly include the notion of speed. For POMDPs, this would mean that the same concept is, in some sense, encoded twice: directly in the state space, and indirectly within reward accumulation. Even if the reward function were to encode more, and finer, details of the properties of motion, the POMDP solution would have to search a much larger space of policies, while still being guided by the implicit concept of the reward accumulation procedure. On the other hand, a tactical target expressing variations and correlations between position and speed of motion is now grounded in the state space representation. In this situation, any further constraints, e.g., smoothness of motion, speed limits in different locations, or speed reductions during sharp turns, are explicitly and uniformly expressed by the tactical target, and can result in faster and more effective action selection by a DBC algorithm.
4. EMT-BASED CONTROL AS A DBC
Recently, a control algorithm called EMT-based Control was introduced [13]; it instantiates the DBC framework. Although it provides only an approximate greedy solution in the DBC sense, initial experiments using EMT-based control have been encouraging [14, 15]. EMT-based control is based on the Markovian environment definition, as in the case of POMDPs, but its User and Agent Levels are of the Markovian DBC type of optimality.
• The User Level of EMT-based control defines a limited-case target system dynamics that is independent of action (the same target distribution over state transitions regardless of the action taken). It then utilizes the Kullback-Leibler divergence measure to compose a momentary system dynamics estimator--the Extended Markov Tracking (EMT) algorithm.
The algorithm keeps a system dynamics estimate τt_EMT that is capable of explaining recent change in an auxiliary Bayesian system state estimator from pt−1 to pt, and updates it conservatively using the Kullback-Leibler divergence. Since τt_EMT and pt−1, pt are respectively the conditional and marginal probabilities over the system's state space, "explanation" simply means that pt = τt_EMT ∗ pt−1, and the dynamics estimate update is performed by solving a conservative minimization problem: among the dynamics that explain the observed change, the update stays as close as possible, in Kullback-Leibler divergence, to the previous estimate. (Here, the trace or L1 distance between two distributions p and q is defined by Σs |p(s) − q(s)|.)
Table 1: Structure of POMDP vs. Dynamics-Based Control in Markovian Environments
• The Agent Level in EMT-based control is suboptimal with respect to DBC (though it remains within the DBC framework), performing greedy action selection based on a prediction of EMT's reaction. The prediction is based on the environment model provided by the Environment Design Level, so that if we denote by Ta the environment's transition function limited to action a, and pt−1 is the auxiliary Bayesian system state estimator, then the EMT-based control choice is the action whose predicted effect on the EMT estimate brings that estimate closest, in Kullback-Leibler divergence, to the target system dynamics. Note that this follows the Markovian DBC framework precisely: the rewarding optimality of POMDPs is discarded, and in its place a dynamics estimator (EMT in this case) is manipulated via action effects on the environment to produce an estimate close to the specified target system dynamics. Yet, as we mentioned, naive EMT-based control is suboptimal in the DBC sense, and has several additional limitations that do not exist in the general DBC framework (discussed in Section 4.2).
4.1 Multi-Target EMT
At times, there may exist several behavioral preferences. For example, in the case of museum guards, some art items are more heavily guarded, requiring that the guards stay more often in their close vicinity. On the other hand, no corner of the museum is to be left unchecked, which demands constant motion. Successful museum security demands that the guards adhere to, and balance, both of these behaviors. For EMT-based control, this means facing several tactical targets {qk}, k = 1, ..., K, and the question becomes how to merge and balance them. A balancing mechanism can be applied to resolve this issue. Note that EMT-based control, while selecting an action, creates a preference vector over the set of actions based on their predicted performance with respect to a given target. If these preference vectors are normalized, they can be combined into a single unified preference. This requires replacing standard EMT-based action selection with the algorithm below [15]:
• Given: a set of target dynamics {qk}, k = 1, ..., K, and a vector of weights w(k);
• For each action a ∈ A, predict the future state distribution p̄a_{t+1} = Ta ∗ pt;
• For each action, compute its normalized preference with respect to each target;
• Select the action with the best weighted combination of preferences.
The weights vector w = (w1, ..., wK) allows additional "tuning of importance" among the target dynamics without the need to redesign the targets themselves. This balancing method is also seamlessly integrated into the EMT-based control flow of operation.
4.2 EMT-based Control Limitations
EMT-based control is a sub-optimal (in the DBC sense) representative of the DBC structure. It limits the User by forcing EMT to be its dynamics tracking algorithm, and replaces Agent optimization by greedy action selection. This kind of combination, however, is common for on-line algorithms. Although further development of EMT-based controllers is necessary, the evidence so far suggests that even the simplest form of the algorithm possesses a great deal of power and displays trends that are optimal in the DBC sense of the word. There are two further, EMT-specific, limitations to EMT-based control that are evident at this point. Both already have partial solutions and are the subject of ongoing research. The first limitation is the problem of negative preference.
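The weighted multi-target selection loop of Section 4.1 can be sketched as follows. This is our illustrative rendering, not the paper's algorithm: the per-target preference here is a simple negated L1 deviation between predicted state distributions, whereas the paper scores actions through EMT's Kullback-Leibler machinery, and the normalization step is omitted:

```python
def propagate(dynamics, p, a):
    """Predict the next state distribution: p_bar = Ta * p."""
    out = {}
    for s, ps in p.items():
        for s2, pr in dynamics[(s, a)].items():
            out[s2] = out.get(s2, 0.0) + ps * pr
    return out

def preference(predicted, target_predicted):
    """Hypothetical per-target preference of an action: negated L1 deviation
    between the action's prediction and the target's prediction."""
    keys = set(predicted) | set(target_predicted)
    return -sum(abs(predicted.get(s, 0.0) - target_predicted.get(s, 0.0))
                for s in keys)

def select_action(transition, targets, weights, p, actions):
    """Combine per-target preferences with the weights w(k); pick the best action."""
    def score(a):
        p_bar = propagate(transition, p, a)
        return sum(w * preference(p_bar, propagate(t, p, a))
                   for w, t in zip(weights, targets))
    return max(actions, key=score)

# Toy example: from s0, action a1 reaches s1; the single target always heads for s1.
transition = {("s0", "a0"): {"s0": 1.0}, ("s0", "a1"): {"s1": 1.0},
              ("s1", "a0"): {"s1": 1.0}, ("s1", "a1"): {"s1": 1.0}}
q_go = {(s, a): {"s1": 1.0} for s in ("s0", "s1") for a in ("a0", "a1")}
print(select_action(transition, [q_go], [1.0], {"s0": 1.0}, ["a0", "a1"]))  # a1
```

The weights enter only in the final combination, which is why retuning importance among targets requires no change to the targets themselves.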
In the POMDP framework, for example, this is captured simply through the appearance of values with different signs within the reward structure. For EMT-based control, negative preference means that one would like to avoid a certain distribution over system development sequences; EMT-based control, however, concentrates on getting as close as possible to a distribution. Avoidance is thus unnatural in native EMT-based control. The second limitation comes from the fact that standard environment modeling can create pure sensory actions--actions that do not change the state of the world, and differ only in the way observations are received and in the quality of the observations received. Since the world state does not change, EMT-based control is unable to differentiate between different sensory actions. Notice that both of these limitations of EMT-based control are absent from the general DBC framework, since it may have a tracking algorithm capable of considering pure sensory actions and, unlike the Kullback-Leibler divergence, a distribution deviation measure that is capable of dealing with negative preference.
5. EMT PLAYING TAG
The Game of Tag was first introduced in [11]. It is a single-agent problem of capturing a quarry, and belongs to the class of area-sweeping problems. An example domain is shown in Figure 2.
Figure 2: Tag domain; an agent (A) attempts to seek and capture a quarry (Q)
The Game of Tag severely limits the agent's perception, so that the agent is able to detect the quarry only if they are co-located in the same cell of the grid world. In the classical version of the game, co-location leads to a special observation, and the 'Tag' action can be performed. We slightly modify this setting: the moment both agents occupy the same cell, the game ends.
Both the agent and its quarry have the same motion capability, which allows them to move in four directions: North, South, East, and West. These form a formal space of actions within a Markovian environment. The state space of the formal Markovian environment is described by the cross-product of the agent's and the quarry's positions; for Figure 2, it would be S = {s0, ..., s23} × {s0, ..., s23}. The effects of an action taken by the agent are deterministic, but the environment in general has a stochastic response due to the motion of the quarry: with probability q0 it stays put, and with probability 1 − q0 it moves to an adjacent cell further away from the agent. In our experiments this was taken to be q0 = 0.2, while the worked instance of Figure 2 uses q0 = 0.1. Although the evasive behavior of the quarry is known to the agent, the quarry's position is not. The only sensory information available to the agent is its own location. We use EMT and directly specify the target dynamics. For the Game of Tag, we can easily formulate three major trends: catching the quarry, staying mobile, and stalking the quarry. These yield three corresponding target dynamics. Note that none of these targets is directly achievable; for instance, if Qt = s9 and At = s11, there is no action that can move the agent to At+1 = s9 as required by the Tcatch target dynamics. We ran several experiments to evaluate EMT performance in the Tag Game. Three configurations of the domain, shown in Figure 3, were used, each posing a different challenge to the agent due to partial observability. In each setting, a set of 1000 runs was performed with a time limit of 100 steps. In every run, the initial positions of both the agent and its quarry were selected at random; this means that, as far as the agent was concerned, the quarry's initial position was uniformly distributed over the entire domain cell space. We also used two variations of the environment's observability function.
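The quarry's stochastic response can be modeled directly. In this sketch of ours, the escape probability 1 − q0 is split uniformly among the adjacent cells farther from the agent, an assumption the text does not specify, and cells are hypothetical (x, y) grid coordinates rather than the paper's s0...s23 labels:

```python
def manhattan(c1, c2):
    return abs(c1[0] - c2[0]) + abs(c1[1] - c2[1])

def quarry_transition(quarry, agent, cells, q0=0.2):
    """Stochastic quarry response: stay put with probability q0; otherwise move
    to an adjacent cell farther from the agent (split uniformly, by assumption)."""
    neighbors = [c for c in cells if manhattan(c, quarry) == 1]
    away = [c for c in neighbors if manhattan(c, agent) > manhattan(quarry, agent)]
    if not away:
        return {quarry: 1.0}  # cornered: nowhere farther to go
    dist = {quarry: q0}
    for c in away:
        dist[c] = (1.0 - q0) / len(away)
    return dist

cells = [(x, y) for x in range(5) for y in range(5)]
d = quarry_transition(quarry=(2, 2), agent=(0, 2), cells=cells)
print(round(sum(d.values()), 10))  # 1.0
```

Because the quarry never moves toward the agent, any adjacent cell closer to the agent (such as (1, 2) above) receives zero probability.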
In the first version, the observability function mapped all joint positions of hunter and quarry into the position of the hunter as an observation. In the second, only those joint positions in which hunter and quarry occupied different locations were mapped into the hunter's location. The second version thus utilized the fact that once hunter and quarry occupy the same cell, the game ends. The results of these experiments are shown in Table 2. Balancing [15] the catch, move, and stalk target dynamics described in the previous section with the weight vector [0.8, 0.1, 0.1], EMT produced stable performance in all three domains. Although direct comparisons are difficult to make, the EMT performance displayed notable efficiency vis-à-vis the POMDP approach. In spite of a simple and inefficient Matlab implementation of the EMT algorithm, the decision time for any given step averaged significantly below 1 second in all experiments. For the irregular open arena domain, which proved to be the most difficult, 1000 experiment runs bounded by 100 steps each (a total of 42,411 steps) were completed in slightly under 6 hours. That is, over 4 × 10^4 online steps took an order of magnitude less time than the offline computation of the POMDP policy in [11]. The significance of this differential is made even more prominent by the fact that, should the environment model parameters change, the online nature of EMT would allow it to maintain its performance time, while the POMDP policy would need to be recomputed, again requiring a large computational overhead. We also measured the empirical entropy of the agent's behavior cell frequencies from the trial data.
Figure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor
Table 2: Performance of the EMT-based solution in three Tag Game domains and two observability models: I) omniposition quarry, II) quarry is not at hunter's position
As Figure 4 and Figure 5 show, the empirical entropy grows with the length of interaction. For runs where the quarry was not captured immediately, the entropy reaches between 0.85 and 0.95 for different runs and scenarios. As the agent actively seeks the quarry, the entropy never reaches its maximum. One characteristic of the entropy graph for the open arena scenario particularly caught our attention in the case of the omniposition quarry observation model. Near the maximum limit of trial length (100 steps), entropy suddenly dropped. Further analysis of the data showed that under certain circumstances a fluctuating behavior occurs, in which the agent faces equally viable versions of quarry-following behavior. Since the EMT algorithm uses greedy action selection, and the state space does not encode any form of commitment (not even speed or acceleration), the agent becomes locked within a small portion of cells. It is essentially attempting to follow several courses of action simultaneously, all of which are consistent with the target dynamics. This behavior did not occur in our second observation model, since that model significantly reduced the set of eligible courses of action--essentially contributing to tie-breaking among them.
6. DISCUSSION
The design of the EMT solution for the Tag Game exposes the core difference in approach to planning and control between EMT or DBC, on the one hand, and the more familiar POMDP approach, on the other. POMDP defines a reward structure to optimize, and influences system dynamics indirectly through that optimization. EMT discards any reward scheme, and instead measures and influences system dynamics directly.
(Entropy was calculated using a log base equal to the number of possible locations within the domain; this properly scales the entropy into the range [0, 1] for all domains.) Thus, for the Tag Game, we did not search for a reward function that would encode and express our preference over the agent's behavior, but rather directly set three (heuristic) behavior preferences as the basis for target dynamics to be maintained. The experimental data show that these targets need not be directly achievable via the agent's actions. However, the relationship between EMT performance and the achievability of target dynamics remains to be explored. The Tag Game experiment data also revealed the different emphases DBC and POMDPs place on the formulation of an environment state space. POMDPs rely entirely on the mechanism of reward accumulation maximization, i.e., on shaping the action selection procedure to achieve the necessary state sequencing. DBC, on the other hand, has two sources of sequencing specification: the properties of the action selection procedure, and direct specification within the target dynamics. The importance of the second source was underlined by the Tag Game experiment data, in which the greedy EMT algorithm, applied to a POMDP-type state space specification, failed because a target description over such a state space was incapable of encoding the necessary behavior tendencies, e.g., tie-breaking and commitment to directed motion. The structural differences between DBC (and EMT in particular) and POMDPs prohibit direct performance comparison, and place them on complementary tracks, each within a suitable niche. For instance, POMDPs can be seen as a much more natural formulation of economic sequential decision-making problems, while EMT is a better fit for continual demands for stochastic change, as happens in many robotic or embodied-agent problems. The complementary properties of POMDPs and EMT can be further exploited.
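The normalized entropy measure described in the note above can be computed as follows (our code; the cell identifiers are hypothetical):

```python
import math
from collections import Counter

def normalized_cell_entropy(visited_cells, num_locations):
    """Entropy of the agent's cell-visit frequencies, computed with a log base
    equal to the number of possible locations so the result lies in [0, 1]."""
    counts = Counter(visited_cells)
    total = len(visited_cells)
    h = -sum((c / total) * math.log(c / total, num_locations)
             for c in counts.values())
    return max(0.0, h)  # guard against a -0.0 result

# An agent that never moves has entropy 0; a perfectly even sweep has entropy 1.
print(normalized_cell_entropy(["s0"] * 10, 24))                             # 0.0
print(round(normalized_cell_entropy([f"s{i}" for i in range(24)], 24), 6))  # 1.0
```

Under this normalization, the reported 0.85-0.95 values indicate a near-uniform sweep of the domain, which is how the experiments quantify "surprising" the quarry.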
There is recent interest in using POMDPs in hybrid solutions [17], in which POMDPs are used together with other control approaches to provide results not easily achievable by either approach alone. DBC can be an effective partner in such a hybrid solution. For instance, POMDPs have prohibitively large off-line time requirements for policy computation, but can be readily used in simpler settings to expose beneficial behavioral trends; these can then serve as a form of target dynamics provided to EMT for on-line operation in a larger domain.
7. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action is measured with respect to the deviation of actual system dynamics from the target dynamics.
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
We show that a recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure.
Since EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multitarget version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.
Dynamics Based Control with an Application to Area-Sweeping Problems ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT), is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multi-target EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework and its EMT instantiation. 1. INTRODUCTION Planning and control constitute a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility. While theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7]. We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion: a transition-based specification of the desired system dynamics. The idea here is to view plan execution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria.
We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by the target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of the probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. To evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant of the area-sweeping problem, in which an agent is trying to "tag" a moving target (quarry) whose position is not known with certainty. The experimental data demonstrate that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position). The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5.
Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work. 2. MOTIVATION AND RELATED WORK 3. DYNAMICS BASED CONTROL 3.1 DBC for Markovian Environments 3.2 DBC View of the State Space 4. EMT-BASED CONTROL AS A DBC 4.1 Multi-Target EMT 4.2 EMT-based Control Limitations 5. EMT PLAYING TAG 6. DISCUSSION 7. CONCLUSIONS AND FUTURE WORK In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action is measured with respect to the deviation of actual system dynamics from the target dynamics.
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
We show that a recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure.
Since EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multitarget version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.
Dynamics Based Control with an Application to Area-Sweeping Problems ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT), is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multi-target EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework and its EMT instantiation. 1. INTRODUCTION Planning and control constitute a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function and computing an optimal policy. We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion: a transition-based specification of the desired system dynamics. We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by the target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16].
Here, optimality is measured in terms of the probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work. 7. CONCLUSIONS AND FUTURE WORK In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action is measured with respect to the deviation of actual system dynamics from the target dynamics.
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
We show that a recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.
I-42
A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements
Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP.
[ "distribut constraint optim", "pseudotre arrang", "agent", "maximum sequenti path cost", "cross-edg pseudotre", "multi-agent system", "edg-travers heurist", "job shop schedul", "resourc alloc", "teamwork coordin", "multi-valu util function", "global util", "distribut constraint satisfact and optim", "multi-agent coordin" ]
[ "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "U", "U", "M", "U" ]
A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements∗ James Atlas Computer and Information Sciences University of Delaware Newark, DE 19716 atlas@cis.udel.edu Keith Decker Computer and Information Sciences University of Delaware Newark, DE 19716 decker@cis.udel.edu ABSTRACT Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems General Terms Algorithms 1. INTRODUCTION Many historical problems in the AI community can be transformed into Constraint Satisfaction Problems (CSP). With the advent of distributed AI, multi-agent systems became a popular way to model the complex interactions and coordination required to solve distributed problems. 
CSPs were originally extended to distributed agent environments in [9]. Early domains for distributed constraint satisfaction problems (DisCSP) included job shop scheduling [1] and resource allocation [2]. Many domains for agent systems, especially teamwork coordination, distributed scheduling, and sensor networks, involve overly constrained problems that are difficult or impossible to satisfy for every constraint. Recent approaches to solving problems in these domains rely on optimization techniques that map constraints into multi-valued utility functions. Instead of finding an assignment that satisfies all constraints, these approaches find an assignment that produces a high level of global utility. This extension to the original DisCSP approach has become popular in multi-agent systems, and has been labeled the Distributed Constraint Optimization Problem (DCOP) [1]. Current algorithms that solve complete DCOPs use two main approaches: search and dynamic programming. Search based algorithms that originated from DisCSP typically use some form of backtracking [10] or bounds propagation, as in ADOPT [3]. Dynamic programming based algorithms include DPOP and its extensions [5, 6, 7]. To date, both categories of algorithms arrange agents into a traditional pseudotree to solve the problem. It has been shown in [6] that any constraint graph can be mapped into a traditional pseudotree. However, it was also shown that finding the optimal pseudotree was NP-Hard. We began to investigate the performance of traditional pseudotrees generated by current edge-traversal heuristics. We found that these heuristics often produced little parallelism as the pseudotrees tended to have high depth and low branching factors. We suspected that there could be other ways to arrange the pseudotrees that would provide increased parallelism and smaller message sizes. 
After exploring these other arrangements we found that cross-edged pseudotrees provide shorter depths and higher branching factors than the traditional pseudotrees. Our hypothesis was that these cross-edged pseudotrees would outperform traditional pseudotrees for some problem types. In this paper we introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements which include cross-edged pseudotrees. We begin with a definition of DCOP, traditional pseudotrees, and cross-edged pseudotrees. We then provide a summary of the original DPOP algorithm and introduce our DCPOP algorithm. We discuss the complexity of our algorithm as well as the impact of pseudotree generation heuristics. We then show that our Distributed Cross-edged Pseudotree Optimization Procedure (DCPOP) performs significantly better in practice than the original DPOP algorithm for some problem instances. We conclude with a selection of ideas for future work and extensions for DCPOP.
978-81-904262-7-5 (RPS) c 2007 IFAAMAS
2. PROBLEM DEFINITION DCOP has been formalized in slightly different ways in recent literature, so we will adopt the definition as presented in [6]. A Distributed Constraint Optimization Problem with n nodes and m constraints consists of the tuple <X, D, U> where:
• X = {x1, ..., xn} is a set of variables, each one assigned to a unique agent
• D = {d1, ..., dn} is a set of finite domains for each variable
• U = {u1, ..., um} is a set of utility functions such that each function involves a subset of variables in X and defines a utility for each combination of values among these variables
An optimal solution to a DCOP instance consists of an assignment of values in D to X such that the sum of utilities in U is maximal. Problem domains that require minimum cost instead of maximum utility can map costs into negative utilities.
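The tuple definition above can be made concrete with a small executable sketch. This is not the authors' code; the variables, domains, and utility functions below are invented for illustration, and the brute-force solver only spells out what "an assignment maximizing the sum of utilities in U" means:

```python
from itertools import product

# Hypothetical toy instance of the tuple <X, D, U> (names and utilities
# invented for illustration; binary utility functions keyed by variable pair).
X = ["x1", "x2", "x3"]
D = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
U = {
    ("x1", "x2"): lambda a, b: 5 if a != b else 0,
    ("x2", "x3"): lambda a, b: 3 if a == b else 1,
}

def total_utility(assignment):
    # Sum every utility function under a complete assignment.
    return sum(u(assignment[i], assignment[j]) for (i, j), u in U.items())

def solve_brute_force():
    # An optimal solution maximizes the sum of utilities in U over D^X.
    best = max(
        (dict(zip(X, values)) for values in product(*(D[x] for x in X))),
        key=total_utility,
    )
    return best, total_utility(best)

assignment, utility = solve_brute_force()
print(assignment, utility)
```

Real DCOP algorithms such as DPOP avoid this exponential enumeration; the sketch is only a specification of the objective.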
The utility functions represent soft constraints but can also represent hard constraints by using arbitrarily large negative values. For this paper we only consider binary utility functions involving two variables. Higher order utility functions can be modeled with minor changes to the algorithm, but they also substantially increase the complexity. 2.1 Traditional Pseudotrees Pseudotrees are a common structure used in search procedures to allow parallel processing of independent branches. As defined in [6], a pseudotree is an arrangement of a graph G into a rooted tree T such that vertices in G that share an edge are in the same branch in T. A back-edge is an edge between a node X and any node which lies on the path from X to the root (excluding X's parent). Figure 1 shows a pseudotree with four nodes, three edges (A-B, B-C, B-D), and one back-edge (A-C). Four types of relationships between nodes in a pseudotree, also defined in [6], are:
• P(X) - the parent of a node X: the single node higher in the pseudotree that is connected to X directly through a tree edge
• C(X) - the children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through tree edges
• PP(X) - the pseudo-parents of a node X: the set of nodes higher in the pseudotree that are connected to X directly through back-edges (In Figure 1, A = PP(C))
• PC(X) - the pseudo-children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through back-edges (In Figure 1, C = PC(A))
Figure 1: A traditional pseudotree. Solid line edges represent parent-child relationships and the dashed line represents a pseudo-parent-pseudo-child relationship. Figure 2: A cross-edged pseudotree. Solid line edges represent parent-child relationships, the dashed line represents a pseudo-parent-pseudo-child relationship, and the dotted line represents a branch-parent-branch-child relationship. The bolded node, B, is the merge point for node E.
2.2 Cross-edged Pseudotrees We define a cross-edge as an edge from node X to a node Y that is above X but not in the path from X to the root. A cross-edged pseudotree is a traditional pseudotree with the addition of cross-edges. Figure 2 shows a cross-edged pseudotree with a cross-edge (D-E). In a cross-edged pseudotree we designate certain edges as primary. The set of primary edges defines a spanning tree of the nodes. The parent, child, pseudo-parent, and pseudo-child relationships from the traditional pseudotree are now defined in the context of this primary edge spanning tree. This definition also yields two additional types of relationships that may exist between nodes:
• BP(X) - the branch-parents of a node X: the set of nodes higher in the pseudotree that are connected to X but are not in the primary path from X to the root (In Figure 2, D = BP(E))
• BC(X) - the branch-children of a node X: the set of nodes lower in the pseudotree that are connected to X but are not in any primary path from X to any leaf node (In Figure 2, E = BC(D))
2.3 Pseudotree Generation
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
Current algorithms usually have a pre-execution phase to generate a traditional pseudotree from a general DCOP instance. Our DCPOP algorithm generates a cross-edged pseudotree in the same fashion. First, the DCOP instance <X, D, U> translates directly into a graph with X as the set of vertices and an edge for each pair of variables represented in U. Next, various heuristics are used to arrange this graph into a pseudotree. One common heuristic is to perform a guided depth-first search (DFS) as the resulting traversal is a pseudotree, and a DFS can easily be performed in a distributed fashion. We define an edge-traversal based method as any method that produces a pseudotree in which all parent/child pairs share an edge in the original graph.
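The edge taxonomy above can be sketched in a few lines. This is our own encoding, not the paper's code: the primary (tree) edges are given by a parent map shaped like Figure 2, with A as the root, back-edge A-C, and cross-edge D-E:

```python
# Illustrative sketch (our encoding, not the paper's code): classify an edge
# of a rooted arrangement as a tree edge, back-edge, or cross-edge, following
# the definitions above.
parent = {"B": "A", "C": "B", "D": "B", "E": "C"}

def ancestors(node):
    path = []
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def classify(edge):
    x, y = edge
    if len(ancestors(x)) < len(ancestors(y)):
        x, y = y, x  # orient so x is the deeper endpoint
    if parent.get(x) == y:
        return "tree-edge"
    # back-edge: the higher endpoint lies on the path from x to the root;
    # cross-edge: it is above x but NOT on that path.
    return "back-edge" if y in ancestors(x) else "cross-edge"

print(classify(("C", "B")))  # tree-edge
print(classify(("A", "C")))  # back-edge
print(classify(("D", "E")))  # cross-edge
```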
This includes DFS, breadth-first search, and best-first search based traversals. Our heuristics that generate cross-edged pseudotrees use a distributed best-first search traversal. 3. DPOP ALGORITHM The original DPOP algorithm operates in three main phases. The first phase generates a traditional pseudotree from the DCOP instance using a distributed algorithm. The second phase joins utility hypercubes from children and the local node and propagates them towards the root. The third phase chooses an assignment for each domain in a top down fashion beginning with the agent at the root node. The complexity of DPOP depends on the size of the largest computation and utility message during phase two. It has been shown that this size directly corresponds to the induced width of the pseudotree generated in phase one [6]. DPOP uses polynomial time heuristics to generate the pseudotree since finding the minimum induced width pseudotree is NP-hard. Several distributed edge-traversal heuristics have been developed to find low width pseudotrees [8]. At the end of the first phase, each agent knows its parent, children, pseudo-parents, and pseudo-children. 3.1 Utility Propagation Agents located at leaf nodes in the pseudotree begin the process by calculating a local utility hypercube. This hypercube at node X contains summed utilities for each combination of values in the domains for P(X) and PP(X). This hypercube has dimensional size equal to the number of pseudo-parents plus one. A message containing this hypercube is sent to P(X). Agents located at non-leaf nodes wait for all messages from children to arrive. Once the agent at node Y has all utility messages, it calculates its local utility hypercube which includes domains for P(Y), PP(Y), and Y. The local utility hypercube is then joined with all of the hypercubes from the child messages. At this point all utilities involving node Y are known, and the domain for Y may be safely eliminated from the joined hypercube.
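The two phase-two primitives, joining utility hypercubes and eliminating a domain by maximizing over it, can be sketched as follows. This is an illustrative centralized rendering, not the distributed implementation; the dict-based hypercube encoding and a shared binary domain are our assumptions:

```python
from itertools import product

# Minimal sketch (not the authors' implementation) of the phase-two
# primitives. A hypercube is (variables, table) where table maps a tuple
# of domain values, in variable order, to a summed utility.
DOMAIN = [0, 1]

def join(cube_a, cube_b):
    vars_a, tab_a = cube_a
    vars_b, tab_b = cube_b
    joined_vars = vars_a + tuple(v for v in vars_b if v not in vars_a)
    table = {}
    for values in product(DOMAIN, repeat=len(joined_vars)):
        env = dict(zip(joined_vars, values))
        table[values] = (tab_a[tuple(env[v] for v in vars_a)]
                         + tab_b[tuple(env[v] for v in vars_b)])
    return joined_vars, table

def eliminate(cube, var):
    # Keep, for each combination of the remaining domains, the best
    # utility over the eliminated variable's domain.
    vars_, tab = cube
    keep = tuple(v for v in vars_ if v != var)
    idx = vars_.index(var)
    out = {}
    for values, util in tab.items():
        key = values[:idx] + values[idx + 1:]
        out[key] = max(out.get(key, float("-inf")), util)
    return keep, out

a = (("X", "Y"), {(0, 0): 0, (0, 1): 5, (1, 0): 5, (1, 1): 0})
b = (("Y", "Z"), {(0, 0): 3, (0, 1): 1, (1, 0): 1, (1, 1): 3})
print(eliminate(join(a, b), "Y"))
```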
This elimination process chooses the best utility over the domain of Y for each combination of the remaining domains. A message containing this hypercube is now sent to P(Y). The dimensional size of this hypercube depends on the number of overlapping domains in received messages and the local utility hypercube. This dynamic programming based propagation phase continues until the agent at the root node of the pseudotree has received all messages from its children. 3.2 Value Propagation Value propagation begins when the agent at the root node Z has received all messages from its children. Since Z has no parents or pseudo-parents, it simply combines the utility hypercubes received from its children. The combined hypercube contains only values for the domain for Z. At this point the agent at node Z simply chooses the assignment for its domain that has the best utility. A value propagation message with this assignment is sent to each node in C(Z). Each other node then receives a value propagation message from its parent and chooses the assignment for its domain that has the best utility given the assignments received in the message. The node adds its domain assignment to the assignments it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen an assignment for their domain. 4. DCPOP ALGORITHM Our extension to the original DPOP algorithm, shown in Algorithm 1, shares the same three phases. The first phase generates the cross-edged pseudotree for the DCOP instance. The second phase merges branches and propagates the utility hypercubes. The third phase chooses assignments for domains at branch merge points and in a top down fashion, beginning with the agent at the root node. For the first phase we generate a pseudotree using several distributed heuristics and select the one with lowest overall complexity. 
The complexity of the computation and utility message size in DCPOP does not directly correspond to the induced width of the cross-edged pseudotree. Instead, we use a polynomial time method for calculating the maximum computation and utility message size for a given cross-edged pseudotree. A description of this method and the pseudotree selection process appears in Section 5. At the end of the first phase, each agent knows its parent, children, pseudo-parents, pseudo-children, branch-parents, and branch-children. 4.1 Merging Branches and Utility Propagation In the original DPOP algorithm a node X only had utility functions involving its parent and its pseudo-parents. In DCPOP, a node X is allowed to have a utility function involving a branch-parent. The concept of a branch can be seen in Figure 2 with node E representing our node X. The two distinct paths from node E to node B are called branches of E. The single node where all branches of E meet is node B, which is called the merge point of E. Agents with nodes that have branch-parents begin by sending a utility propagation message to each branch-parent. This message includes a two dimensional utility hypercube with domains for the node X and the branch-parent BP(X). It also includes a branch information structure which contains the origination node of the branch, X, the total number of branches originating from X, and the number of branches originating from X that are merged into a single representation by this branch information structure (this number starts at 1). Intuitively when the number of merged branches equals the total number of originating branches, the algorithm has reached the merge point for X. In Figure 2, node E sends a utility propagation message to its branch-parent, node D. This message has dimensions for the domains of E and D, and includes branch information with an origin of E, 2 total branches, and 1 merged branch. 
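The branch information structure just described can be sketched as a small tuple, with merge-point detection as accumulation per origin. The field names are our assumption, not the paper's:

```python
# Hedged sketch (field names assumed) of the branch information structure:
# (origin, total_branches, merged_count). A node accumulates merged counts
# per origin; it is the merge point for an origin once merged == total.
def merge_branch_info(infos):
    merged = {}
    for origin, total, count in infos:
        _, prev_count = merged.get(origin, (total, 0))
        merged[origin] = (total, prev_count + count)
    return merged

def merge_points(infos):
    return [origin
            for origin, (total, count) in merge_branch_info(infos).items()
            if count == total]

# Node B in Figure 2 receives one branch of E via C and one via D:
print(merge_points([("E", 2, 1), ("E", 2, 1)]))  # ['E']
print(merge_points([("E", 2, 1)]))               # [] -- not yet the merge point
```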
As in the original DPOP utility propagation phase, an agent at leaf node X sends a utility propagation message to its parent. In DCPOP this message contains dimensions for the domains of P(X) and PP(X). If node X also has branch-parents, then the utility propagation message also contains a dimension for the domain of X, and will include a branch information structure. In Figure 2, node E sends a utility propagation message to its parent, node C. This message has dimensions for the domains of E and C, and includes branch information with an origin of E, 2 total branches, and 1 merged branch. When a node Y receives utility propagation messages from all of its children and branch-children, it merges any branches with the same origination node X. The merged branch information structure accumulates the number of merged branches for X. If the cumulative total number of merged branches equals the total number of branches, then Y is the merge point for X. This means that the utility hypercubes present at Y contain all information about the valuations for utility functions involving node X. In addition to the typical elimination of the domain of Y from the utility hypercubes, we can now safely eliminate the domain of X from the utility hypercubes. To illustrate this process, we will examine what happens in the second phase for node B in Figure 2. In the second phase Node B receives two utility propagation messages. The first comes from node C and includes dimensions for domains E, B, and A. It also has a branch information structure with origin of E, 2 total branches, and 1 merged branch. The second comes from node D and includes dimensions for domains E and B. It also has a branch information structure with origin of E, 2 total branches, and 1 merged branch. Node B then merges the branch information structures from both messages because they have the same origination, node E.
Since the number of merged branches originating from E is now 2 and the total branches originating from E is 2, node B now eliminates the dimensions for domain E. Node B also eliminates the dimension for its own domain, leaving only information about domain A. Node B then sends a utility propagation message to node A, containing only one dimension for the domain of A. Although not possible in DPOP, this method of utility propagation and dimension elimination may produce hypercubes at node Y that do not share any domains. In DCPOP we do not join domain independent hypercubes, but instead may send multiple hypercubes in the utility propagation message sent to the parent of Y. This lazy approach to joins helps to reduce message sizes. 4.2 Value Propagation As in DPOP, value propagation begins when the agent at the root node Z has received all messages from its children. At this point the agent at node Z chooses the assignment for its domain that has the best utility. If Z is the merge point for the branches of some node X, Z will also choose the assignment for the domain of X. Thus any node that is a merge point will choose assignments for a domain other than its own. These assignments are then passed down the primary edge hierarchy. If node X in the hierarchy has branch-parents, then the value assignment message from P(X) will contain an assignment for the domain of X. Every node in the hierarchy adds any assignments it has chosen to the ones it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen or received an assignment for their domain. 4.3 Proof of Correctness We will prove the correctness of DCPOP by first noting that DCPOP fully extends DPOP and then examining the two cases for value assignment in DCPOP. Given a traditional pseudotree as input, the DCPOP algorithm execution is identical to DPOP. 
Using a traditional pseudotree arrangement no nodes have branch-parents or branch-children since all edges are either back-edges or tree edges. Thus the DCPOP algorithm using a traditional pseudotree sends only utility propagation messages that contain domains belonging to the parent or pseudo-parents of a node. Since no node has any branch-parents, no branches exist, and thus no node serves as a merge point for any other node. Thus all value propagation assignments are chosen at the node of the assignment domain. For DCPOP execution with cross-edged pseudotrees, some nodes serve as merge points. We note that any node X that is not a merge point assigns its value exactly as in DPOP. The local utility hypercube at X contains domains for X, P(X), PP(X), and BC(X). As in DPOP the value assignment message received at X includes the values assigned to P(X) and PP(X). Also, since X is not a merge point, all assignments to BC(X) must have been calculated at merge points higher in the tree and are in the value assignment message from P(X). Thus after eliminating domains for which assignments are known, only the domain of X is left. The agent at node X can now correctly choose the assignment with maximum utility for its own domain. If node X is a merge point for some branch-child Y, we know that X must be a node along the path from Y to the root, and from P(Y) and all BP(Y) to the root. From the algorithm, we know that Y necessarily has all information from C(Y), PC(Y), and BC(Y) since it waits for their messages. Node X has information about all nodes below it in the tree, which would include Y, P(Y), BP(Y), and those PP(Y) that are below X in the tree. For any PP(Y) above X in the tree, X receives the assignment for the domain of PP(Y) in the value assignment message from P(X). Thus X has utility information about all of the utility functions of which Y is a part. 
By eliminating domains included in the value assignment message, node X is left with a local utility hypercube with domains for X and Y. The agent at node X can now correctly choose the assignments with maximum utility for the domains of X and Y. 4.4 Complexity Analysis The second phase of DCPOP sends one message to each P(X), PP(X), and BP(X). The third phase sends one value assignment message to each C(X). Thus, DCPOP produces a linear number of messages with respect to the number of edges (utility functions) in the cross-edged pseudotree and the original DCOP instance. The actual complexity of DCPOP depends on two additional measurements: message size and computation size. Message size and computation size in DCPOP depend on the number of overlapping branches as well as the number of overlapping back-edges. It was shown in [6] that the number of overlapping back-edges is equal to the induced width of the pseudotree. In a poorly constructed cross-edged pseudotree, the number of overlapping branches at node X can be as large as the total number of descendants of X. Thus, the total message size in DCPOP in a poorly constructed instance can be space-exponential in the total number of nodes in the graph. However, in practice a well constructed cross-edged pseudotree can achieve much better results. Later we address the issue of choosing well constructed cross-edged pseudotrees from a set. We introduce an additional measurement of the maximum sequential path cost through the algorithm. This measurement directly relates to the maximum amount of parallelism achievable by the algorithm. To take this measurement we first store the total computation size for each node during phase two and three. This computation size represents the number of individual accesses to a value in a hypercube at each node. For example, a join between two domains of size 4 costs 4 ∗ 4 = 16.
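That per-join cost is just the cell count of the resulting hypercube, which a short sketch can compute from domain sizes (our helper, for illustration only):

```python
from math import prod  # Python 3.8+

# Sketch of the computation-size bookkeeping: the cost of joining two
# hypercubes is the number of cells in the result, i.e. the product of the
# domain sizes of the union of their variables (so 4 * 4 = 16 above).
def join_cost(domain_sizes, vars_a, vars_b):
    joined = set(vars_a) | set(vars_b)
    return prod(domain_sizes[v] for v in joined)

sizes = {"A": 4, "B": 4, "C": 4}
print(join_cost(sizes, ["A"], ["B"]))            # 16
print(join_cost(sizes, ["A", "B"], ["B", "C"]))  # 64: shared B counted once
```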
Two directed acyclic graphs (DAG) can then be drawn; one with the utility propagation messages as edges and the phase two costs at nodes, and the other with value assignment messages and the phase three costs at nodes. The maximum sequential path cost is equal to the sum of the longest path on each DAG from the root to any leaf node.
Algorithm 1 DCPOP Algorithm
1: DCPOP(X, D, U) Each agent Xi executes:
Phase 1: pseudotree creation
2: elect leader from all Xj ∈ X
3: elected leader initiates pseudotree creation
4: afterwards, Xi knows P(Xi), PP(Xi), BP(Xi), C(Xi), BC(Xi) and PC(Xi)
Phase 2: UTIL message propagation
5: if |BP(Xi)| > 0 then
6:   BRANCH_Xi ← |BP(Xi)| + 1
7:   for all Xk ∈ BP(Xi) do
8:     UTIL_Xi(Xk) ← Compute utils(Xi, Xk)
9:     Send message(Xk, UTIL_Xi(Xk), BRANCH_Xi)
10: if |C(Xi)| = 0 (i.e. Xi is a leaf node) then
11:   UTIL_Xi(P(Xi)) ← Compute utils(P(Xi), PP(Xi)) for all PP(Xi)
12:   Send message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi)
13:   Send message(PP(Xi), empty UTIL, empty BRANCH) to all PP(Xi)
14: activate UTIL Message handler()
Phase 3: VALUE message propagation
15: activate VALUE Message handler()
END ALGORITHM
UTIL Message handler(Xk, UTIL_Xk(Xi), BRANCH_Xk)
16: store UTIL_Xk(Xi), BRANCH_Xk(Xi)
17: if UTIL messages from all children and branch children arrived then
18:   for all Bj ∈ BRANCH(Xi) do
19:     if Bj is merged then
20:       join all hypercubes where Bj ∈ UTIL(Xi)
21:       eliminate Bj from the joined hypercube
22:   if P(Xi) == null (that means Xi is the root) then
23:     v*_i ← Choose optimal(null)
24:     Send VALUE(Xi, v*_i) to all C(Xi)
25:   else
26:     UTIL_Xi(P(Xi)) ← Compute utils(P(Xi), PP(Xi))
27:     Send message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi(P(Xi)))
VALUE Message handler(VALUE_{Xi,P(Xi)})
28: add all Xk ← v*_k ∈ VALUE_{Xi,P(Xi)} to agent view
29: Xi ← v*_i = Choose optimal(agent view)
30: Send VALUE_{Xl,Xi} to all Xl ∈ C(Xi)
5. HEURISTICS In our assessment of complexity in DCPOP we focused on the worst case possibly produced by the algorithm. We acknowledge that in real world problems the generation of the pseudotree has a significant impact on the actual performance. The problem of finding the best pseudotree for a given DCOP instance is NP-Hard. Thus a heuristic is used for generation, and the performance of the algorithm depends on the pseudotree found by the heuristic. Some previous research focused on finding heuristics to generate good pseudotrees [8]. While we have developed some heuristics that generate good cross-edged pseudotrees for use with DCPOP, our focus has been to use multiple heuristics and then select the best pseudotree from the generated pseudotrees. We consider only heuristics that run in polynomial time with respect to the number of nodes in the original DCOP instance.
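The multiple-heuristics selection step can be sketched as follows. The function names and the stand-in "pseudotrees" are hypothetical; the point is only the pattern of generating with every heuristic, scoring each result with the cheap measurement pass, and keeping the cheapest:

```python
# Sketch of the selection step (function names hypothetical): generate a
# pseudotree with every heuristic, score each with the cheap measurement
# pass, and keep the cheapest.
def pick_best_pseudotree(heuristics, dcop, measure):
    scored = [(measure(tree), name, tree)
              for name, generate in heuristics.items()
              for tree in [generate(dcop)]]
    cost, name, tree = min(scored)
    return name, tree, cost

# Dummy heuristics standing in for DFS-based and best-first generators;
# here a "pseudotree" is just a (label, measured-cost) pair.
heuristics = {
    "dfs_mcn": lambda dcop: ("dfs-tree", 5),
    "best_first_mcn": lambda dcop: ("cross-edged-tree", 3),
}
name, tree, cost = pick_best_pseudotree(heuristics, None, lambda t: t[1])
print(name, cost)
```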
The actual DCPOP algorithm has worst case exponential complexity, but we can calculate the maximum message size, computation size, and sequential path cost for a given cross-edged pseudotree in linear space-time complexity. To do this, we simply run the algorithm without attempting to calculate any of the local utility hypercubes or optimal value assignments. Instead, messages include dimensional and branch information but no utility hypercubes. After each heuristic completes its generation of a pseudotree, we execute the measurement procedure and propagate the measurement information up to the chosen root in that pseudotree. The root then broadcasts the total complexity for that heuristic to all nodes. After all heuristics have had a chance to complete, every node knows which heuristic produced the best pseudotree. Each node then proceeds to begin the DCPOP algorithm using its knowledge of the pseudotree generated by the best heuristic. The heuristics used to generate traditional pseudotrees perform a distributed DFS traversal. The general distributed algorithm uses a token passing mechanism and a linear number of messages. Improved DFS based heuristics use a special procedure to choose the root node, and also provide an ordering function over the neighbors of a node to determine the order of path recursion. The DFS based heuristics used in our experiments come from the work done in [4, 8]. 5.1 The best-first cross-edged pseudotree heuristic The heuristics used to generate cross-edged pseudotrees perform a best-first traversal. A general distributed best-first algorithm for node expansion is presented in Algorithm 2. An evaluation function at each node provides the values that are used to determine the next best node to expand. Note that in this algorithm each node only exchanges its best value with its neighbors. 
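The best-first traversal is driven by an evaluation function over candidate placements. As a hedged illustration of the "best overall" scoring described below (our encoding; the tiebreak direction is our assumption, not stated in the text):

```python
# Hedged sketch of a placement-evaluation function in the spirit of the
# "best overall" function: score = ancestors - (branch-parents +
# branch-children), with the number of unknown relationships as a tiebreak.
def evaluate(num_ancestors, num_branch_parents, num_branch_children,
             num_unknown):
    primary = num_ancestors - (num_branch_parents + num_branch_children)
    return (primary, num_unknown)

# Scoring two hypothetical candidate placements:
candidates = {
    "n1": evaluate(3, 1, 0, 2),  # (2, 2)
    "n2": evaluate(2, 0, 0, 1),  # (2, 1)
}
best = max(candidates, key=candidates.get)
print(best)  # n1 wins on the tiebreak component
```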
Algorithm 2 Distributed Best-First Search Algorithm
root ← elected leader
next(root, ∅)
place(node, parent):
  node.parent ← parent
  node.ancestors ← parent.ancestors ∪ parent
  send placement message (node, node.ancestors) to all neighbors of node
next(current, previous):
  if current is not placed then
    place(current, previous)
    next(current, ∅)
  else
    best ← getBestNeighbor(current, previous)
    if best = ∅ then
      if previous = ∅ then terminate, all nodes are placed
      next(previous, ∅)
    else
      next(best, current)
getBestNeighbor(current, previous):
  best ← ∅; score ← 0
  for all n ∈ current.neighbors do
    if n != previous then
      if n is placed then
        nscore ← getBestNeighbor(n, current)
      else
        nscore ← evaluate(current, n)
      if nscore > score then
        score ← nscore
        best ← n
  return best, score
In our experiments we used several evaluation functions that took as arguments an ordered list of ancestors and a node, which contains a list of neighbors (with each neighbor's placement depth in the tree if it was placed). From these we can calculate branch-parents, branch-children, and unknown relationships for a potential node placement. The best overall function calculated the value as ancestors − (branch-parents + branch-children) with the number of unknown relationships being a tiebreak. After completion each node has knowledge of its parent and ancestors, so it can easily determine which connected nodes are pseudo-parents, branch-parents, pseudo-children, and branch-children. The complexity of the best-first traversal depends on the complexity of the evaluation function. Assuming a complexity of O(V) for the evaluation function, which is the case for our best overall function, the best-first traversal is O(V · E) which is at worst O(n^3). For each v ∈ V we perform a place operation, and find the next node to place using the getBestNeighbor operation. The place operation is at most O(V) because of the sent messages. Finding the next node uses recursion and traverses only already placed nodes, so it has O(V) recursions. Each recursion performs a recursive getBestNeighbor operation that traverses all placed nodes and their neighbors. This operation is O(V · E), but results can be cached using only O(V) space at each node. Thus we have O(V · (V + V + V · E)) = O(V^2 · E). If we are smart about evaluating local changes when each node receives placement messages from its neighbors and cache the results, the getBestNeighbor operation is only O(E). This increases the complexity of the place operation, but for all placements the total complexity is only O(V · E). Thus we have an overall complexity of O(V · E + V · (V + E)) = O(V · E). 6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP We have already shown that given the same input, DCPOP performs the same as DPOP. We have also shown that we can accurately predict performance of a given pseudotree in linear space-time complexity. If we use a constant number of heuristics to generate the set of pseudotrees, we can choose the best pseudotree in linear space-time complexity. We will now show that there exists a DCOP instance for which a cross-edged pseudotree outperforms all possible traditional pseudotrees (based on edge-traversal heuristics). In Figure 3(a) we have a DCOP instance with six nodes. This is a bipartite graph with each partition fully connected to the other partition.
Figure 3: (a) The DCOP instance (b) A traditional pseudotree arrangement for the DCOP instance (c) A cross-edged pseudotree arrangement for the DCOP instance
In Figure 3(b) we see a traditional pseudotree arrangement for this DCOP instance. It is easy to see that any edge-traversal based heuristic cannot expand two nodes from the same partition in succession.
We also see that no node can have more than one child because any such arrangement would be an invalid pseudotree. Thus any traditional pseudotree arrangement for this DCOP instance must take the form of Figure 3(b). We can see that the back-edges F-B and F-A overlap node C. Node C also has a parent E, and a back-edge with D. Using the original DPOP algorithm (or DCPOP since they are identical in this case), we find that the computation at node C involves five domains: A, B, C, D, and E. In contrast, the cross-edged pseudotree arrangement in Figure 3(c) requires only a maximum of four domains in any computation during DCPOP. Since node A is the merge point for branches from both B and C, we can see that each of the nodes D, E, and F has two overlapping branches. In addition each of these nodes has node A as its parent. Using the DCPOP algorithm we find that the computation at node D (or E or F) involves four domains: A, B, C, and D (or E or F). Since no better traditional pseudotree arrangement can be created using an edge-traversal heuristic, we have shown that DCPOP can outperform DPOP even if we use the optimal pseudotree found through edge-traversal. We acknowledge that pseudotree arrangements that allow parent-child relationships without an actual constraint can solve the problem in Figure 3(a) with maximum computation size of four domains. However, current heuristics used with DPOP do not produce such pseudotrees, and such a heuristic would be difficult to distribute since each node would require information about nodes with which it has no constraint. Also, while we do not prove it here, cross-edged pseudotrees can produce smaller message sizes than such pseudotrees even if the computation size is similar. In practice, since finding the best pseudotree arrangement is NP-Hard, we find that heuristics that produce cross-edged pseudotrees often produce significantly smaller computation and message sizes. 7. EXPERIMENTAL RESULTS Existing performance metrics for DCOP algorithms include the total number of messages, synchronous clock cycles, and message size. We have already shown that the total number of messages is linear with respect to the number of constraints in the DCOP instance. We also introduced the maximum sequential path cost (PC) as a measurement of the maximum amount of parallelism achievable by the algorithm. The maximum sequential path cost is equal to the sum of the computations performed on the longest path from the root to any leaf node. We also include as metrics the maximum computation size in number of dimensions (CD) and maximum message size in number of dimensions (MD). To analyze the relative complexity of a given DCOP instance, we find the minimum induced width (IW) of any traditional pseudotree produced by a heuristic for the original DPOP. 7.1 Generic DCOP instances For our initial tests we randomly generated two sets of problems with 3000 cases in each. Each problem was generated by assigning a random number (picked from a range) of constraints to each variable. The generator then created binary constraints until each variable reached its maximum number of constraints. The first set uses 20 variables, and the best DPOP IW ranges from 1 to 16 with an average of 8.5. The second set uses 100 variables, and the best DPOP IW ranges from 2 to 68 with an average of 39.3. Since most of the problems in the second set were too complex to actually compute the solution, we took measurements of the metrics using the techniques described earlier in Section 5 without actually solving the problem. Results are shown for the first set in Table 1 and for the second set in Table 2. For the two problem sets we split the cases into low density and high density categories. Low density cases consist of those problems that have a best DPOP IW less than or equal to half of the total number of nodes (e.g.
IW ≤ 10 for the 20 node problems and IW ≤ 50 for the 100 node problems). High density problems consist of the remainder of the problem sets. In both Table 1 and Table 2 we have listed performance metrics for the original DPOP algorithm, the DCPOP algorithm using only cross-edged pseudotrees (DCPOP-CE), and the DCPOP algorithm using traditional and cross-edged pseudotrees (DCPOP-All). The pseudotrees used for DPOP were generated using 5 heuristics: DFS, DFS MCN, DFS CLIQUE MCN, DFS MCN DSTB, and DFS MCN BEC. These are all versions of the guided DFS traversal discussed in Section 5. The cross-edged pseudotrees used for DCPOP-CE were generated using 5 heuristics: MCN, LCN, MCN A-B, LCN A-B, and LCSG A-B. These are all versions of the best-first traversal discussed in Section 5. For both DPOP and DCPOP-CE we chose the best pseudotree produced by their respective 5 heuristics for each problem in the set. For DCPOP-All we chose the best pseudotree produced by all 10 heuristics for each problem in the set. For the CD and MD metrics the value shown is the average number of dimensions. For the PC metric the value shown is the natural logarithm of the maximum sequential path cost (since the actual value grows exponentially with the complexity of the problem). The final row in both tables is a measurement of improvement of DCPOP-All over DPOP. For the CD and MD metrics the value shown is a reduction in number of dimensions. For the PC metric the value shown is a percentage reduction in the maximum sequential path cost (% = DP OP −DCP OP DCP OP ∗ 100). Notice that DCPOPAll outperforms DPOP on all metrics. This logically follows from our earlier assertion that given the same input, DCPOP performs exactly the same as DPOP. 
Thus given the choice between the pseudotrees produced by all 10 heuristics, DCPOP-All will always outLow Density High Density Algorithm CD MD PC CD MD PC DPOP 7.81 6.81 3.78 13.34 12.34 5.34 DCPOP-CE 7.94 6.73 3.74 12.83 11.43 5.07 DCPOP-All 7.62 6.49 3.66 12.72 11.36 5.05 Improvement 0.18 0.32 13% 0.62 0.98 36% Table 1: 20 node problems Low Density High Density Algorithm CD MD PC CD MD PC DPOP 33.35 32.35 14.55 58.51 57.50 19.90 DCPOP-CE 33.49 29.17 15.22 57.11 50.03 20.01 DCPOP-All 32.35 29.57 14.10 56.33 51.17 18.84 Improvement 1.00 2.78 104% 2.18 6.33 256% Table 2: 100 node problems Figure 4: Computation Dimension Size Figure 5: Message Dimension Size The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 747 Figure 6: Path Cost DCPOP Improvement Ag Mtg Vars Const IW CD MD PC 10 4 12 13.5 2.25 -0.01 -0.01 5.6% 30 14 44 57.6 3.63 0.09 0.09 10.9% 50 24 76 101.3 4.17 0.08 0.09 10.7% 100 49 156 212.9 5.04 0.16 0.20 30.0% 150 74 236 321.8 5.32 0.21 0.23 35.8% 200 99 316 434.2 5.66 0.18 0.22 29.5% Table 3: Meeting Scheduling Problems perform DPOP. Another trend we notice is that the improvement is greater for high density problems than low density problems. We show this trend in greater detail in Figures 4, 5, and 6. Notice how the improvement increases as the complexity of the problem increases. 7.2 Meeting Scheduling Problem In addition to our initial generic DCOP tests, we ran a series of tests on the Meeting Scheduling Problem (MSP) as described in [6]. The problem setup includes a number of people that are grouped into departments. Each person must attend a specified number of meetings. Meetings can be held within departments or among departments, and can be assigned to one of eight time slots. The MSP maps to a DCOP instance where each variable represents the time slot that a specific person will attend a specific meeting. 
All variables that belong to the same person have mutual exclusion constraints placed so that the person cannot attend more than one meeting during the same time slot. All variables that belong to the same meeting have equality constraints so that all of the participants choose the same time slot. Unary constraints are placed on each variable to account for a persons valuation of each meeting and time slot. For our tests we generated 100 sample problems for each combination of agents and meetings. Results are shown in Table 3. The values in the first five columns represent (in left to right order), the total number of agents, the total number of meetings, the total number of variables, the average total number of constraints, and the average minimum IW produced by a traditional pseudotree. The last three columns show the same metrics we used for the generic DCOP instances, except this time we only show the improvements of DCPOP-All over DPOP. Performance is better on average for all MSP instances, but again we see larger improvements for more complex problem instances. 8. CONCLUSIONS AND FUTURE WORK We presented a complete, distributed algorithm that solves general DCOP instances using cross-edged pseudotree arrangements. Our algorithm extends the DPOP algorithm by adding additional utility propagation messages, and introducing the concept of branch merging during the utility propagation phase. Our algorithm also allows value assignments to occur at higher level merge points for lower level nodes. We have shown that DCPOP fully extends DPOP by performing the same operations given the same input. We have also shown through some examples and experimental data that DCPOP can achieve greater performance for some problem instances by extending the allowable input set to include cross-edged pseudotrees. We placed particular emphasis on the role that edge-traversal heuristics play in the generation of pseudotrees. 
We have shown that the performance penalty is minimal to generate multiple heuristics, and that we can choose the best generated pseudotree in linear space-time complexity. Given the importance of a good pseudotree for performance, future work will include new heuristics to find better pseudotrees. Future work will also include adapting existing DPOP extensions [5, 7] that support different problem domains for use with DCPOP. 9. REFERENCES [1] J. Liu and K. P. Sycara. Exploiting problem structure for distributed constraint optimization. In V. Lesser, editor, Proceedings of the First International Conference on Multi-Agent Systems, pages 246-254, San Francisco, CA, 1995. MIT Press. [2] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and S. Kulkarni. A dynamic distributed constraint satisfaction approach to resource allocation. Lecture Notes in Computer Science, 2239:685-700, 2001. [3] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An asynchronous complete method for distributed constraint optimization. In AAMAS 03, 2003. [4] A. Petcu. Frodo: A framework for open/distributed constraint optimization. Technical Report No. 2006/001 2006/001, Swiss Federal Institute of Technology (EPFL), Lausanne (Switzerland), 2006. http://liawww.epfl.ch/frodo/. [5] A. Petcu and B. Faltings. A-dpop: Approximations in distributed optimization. In poster in CP 2005, pages 802-806, Sitges, Spain, October 2005. [6] A. Petcu and B. Faltings. Dpop: A scalable method for multiagent constraint optimization. In IJCAI 05, pages 266-271, Edinburgh, Scotland, Aug 2005. [7] A. Petcu, B. Faltings, and D. Parkes. M-dpop: Faithful distributed implementation of efficient social choice problems. In AAMAS 06, pages 1397-1404, Hakodate, Japan, May 2006. [8] G. Ushakov. Solving meeting scheduling problems using distributed pseudotree-optimization procedure. Masters thesis, ´Ecole Polytechnique F´ed´erale de Lausanne, 2005. [9] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. 
Distributed constraint satisfaction for formalizing distributed problem solving. In International Conference on Distributed Computing Systems, pages 614-621, 1992. [10] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. The distributed constraint satisfaction problem: Formalization and algorithms. Knowledge and Data Engineering, 10(5):673-685, 1998. 748 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
I-56
"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework" (...TRUNCATED)
"This paper presents a novel, unified distributed constraint satisfaction framework based on automat (...TRUNCATED)
["constraint","algorithm","bdi","negoti","distribut constraint satisfact problem","dcsp","share envi (...TRUNCATED)
[ "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U", "M", "R" ]
"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework Bao Chau Le Dinh and Kiam (...TRUNCATED)
"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework\nABSTRACT\nThis paper pre (...TRUNCATED)
"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework\nABSTRACT\nThis paper pre (...TRUNCATED)
"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework\nABSTRACT\nThis paper pre (...TRUNCATED)
I-52
"A Unified and General Framework for Argumentation-based Negotiation" (...TRUNCATED)
"This paper proposes a unified and general framework for argumentation-based negotiation, in which t (...TRUNCATED)
["framework","argument","argument","negoti","outcom","theori","agent","argument-base negoti","conces (...TRUNCATED)
[ "P", "P", "P", "P", "P", "P", "P", "M", "R", "M", "U", "U", "U" ]
"A Unified and General Framework for Argumentation-based Negotiation Leila Amgoud IRIT - CNRS 118, r (...TRUNCATED)
"A Unified and General Framework for Argumentation-based Negotiation\nABSTRACT\nThis paper proposes (...TRUNCATED)
"A Unified and General Framework for Argumentation-based Negotiation\nABSTRACT\nThis paper proposes (...TRUNCATED)
"A Unified and General Framework for Argumentation-based Negotiation\nABSTRACT\nThis paper proposes (...TRUNCATED)

# Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation

SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models. The dataset is composed of 244 full-text scientific papers collected from the ACM Digital Library. Keyphrases were annotated by readers and combined with those provided by the authors. Details about the SemEval-2010 dataset can be found in the original paper (Kim et al., 2010).

This version of the dataset was produced by Boudin et al. (2016) and provides four increasingly sophisticated levels of document preprocessing:

• lvl-1: default text files provided by the SemEval-2010 organizers.

• lvl-2: for each file, we manually retrieve the original PDF file from the ACM Digital Library. We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505. We use the detected logical structure to remove author-assigned keyphrases and select only the relevant elements: title, headers, abstract, introduction, related work, body text and conclusion. We finally apply a systematic dehyphenation at line breaks.

• lvl-3: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.

• lvl-4: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique. We keep the title and abstract and select the most content bearing sentences from the remaining contents.
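The dehyphenation applied at lvl-2 can be sketched as a simple regex pass over the extracted text. This is a minimal illustration only (the function name is hypothetical, and the actual pipeline may handle more cases than this):

```python
import re

def dehyphenate(text):
    # Join words hyphenated across line breaks, e.g. "implemen-\ntation"
    # becomes "implementation". Note that this naive sketch would also join
    # a word that legitimately contains a hyphen if it is broken at that
    # hyphen (e.g. "graph-\nbased" becomes "graphbased"); the actual
    # pipeline may treat such cases differently.
    return re.sub(r"(?<=\w)-\n(?=\w)", "", text)
```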

Titles and abstracts, collected from the SciCorefCorpus, are also provided. Details about how they were extracted and cleaned up can be found in (Chaimongkol et al., 2014).

Reference keyphrases are provided in stemmed form (because they were provided in this form for the test split of the competition). They are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in (Boudin and Gallina, 2021). Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text. Details about the process can be found in prmu.py. The Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and text (lvl-1).
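The stem-then-match procedure can be illustrated with a simplified sketch. Here whitespace splitting stands in for the spacy tokenization, and the function names are hypothetical; prmu.py remains the authoritative implementation:

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text):
    # Lowercase, split on whitespace, then Porter-stem each token.
    # (The real pipeline tokenizes with spacy's en_core_web_sm model,
    # keeping hyphenated words such as "graph-based" as single tokens.)
    return [stemmer.stem(tok) for tok in text.lower().split()]

def is_present(keyphrase, source_text):
    # A keyphrase is "Present" if its stemmed tokens occur as a
    # contiguous sequence in the stemmed source text.
    kp = stem_tokens(keyphrase)
    src = stem_tokens(source_text)
    return any(src[i:i + len(kp)] == kp for i in range(len(src) - len(kp) + 1))

print(is_present("sequential auctions", "the sequential auction problem on eBay"))  # True
```

Note how stemming makes the match robust to inflection: "auctions" and "auction" both stem to "auction", which is why the reference keyphrases are distributed in stemmed form.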

## Content and statistics

The dataset is divided into the following two splits:

| Split | # documents | # words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
|-------|-------------|---------|--------------|-----------|-------------|---------|----------|
| Train | 144         | 184.6   | 15.44        | 42.16     | 7.36        | 26.85   | 23.63    |
| Test  | 100         | 203.1   | 14.66        | 40.11     | 8.34        | 27.12   | 24.43    |

Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers.

The following data fields are available:

• id: unique identifier of the document.
• title: title of the document.
• abstract: abstract of the document.
• lvl-1: content of the document with no text processing.
• lvl-2: content of the document retrieved from original PDF files and cleaned up.
• lvl-3: content of the document further abridged to relevant sections.
• lvl-4: content of the document further abridged using an unsupervised summarization technique.
• keyphrases: list of reference keyphrases.
• prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases.