A Scalable Distributed Information Management System

Praveen Yalagandula (ypraveen@cs.utexas.edu) and Mike Dahlin (dahlin@cs.utexas.edu)
Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712

ABSTRACT

We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information. To serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures. We design, implement, and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHTs) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication. Through extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.

Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems - Network Operating Systems, Distributed Databases

General Terms: Management, Design, Experimentation

1. INTRODUCTION

The goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications. Monitoring, querying, and reacting to changes in the state of a distributed system are core components of applications such as system management [15, 31, 37, 42], service placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], sensor monitoring and control [20, 21], multicast tree formation [8, 9, 33, 36, 38], and naming and request routing [10, 11]. We therefore speculate that a SDIMS in a networked system would provide a distributed operating systems backbone and facilitate the development and deployment of new distributed services.

For a large-scale information system, hierarchical aggregation is a fundamental abstraction for scalability. Rather than expose all information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information. In a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query "find a [nearby] node with at least 1 GB of free memory" or "find a [nearby] copy of file foo". A hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability.

To be used as a basic building block, a SDIMS should have four properties.
First, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes. Enterprise and global-scale systems today might have tens of thousands to millions of nodes, and these numbers will increase over time. Similarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines).

Second, the system should have flexibility to accommodate a broad range of applications and attributes. For example, read-dominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often. An approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes. Conversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes. Therefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications.

Third, a SDIMS should provide administrative isolation. In a large system, it is natural to arrange nodes in an organizational or an administrative hierarchy. A SDIMS should support administrative isolation in which queries about an administrative domain's information can be satisfied within the domain, so that the system can operate during disconnections from other domains, so that an external observer cannot monitor or affect intra-domain queries, and so that domain-scoped queries can be supported efficiently.

Fourth, the system must be robust to node failures and disconnections. A SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can trade off the cost of adaptation against the consistency level of the aggregated results when reconfigurations occur.

We draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs). Astrolabe [38] is a robust information management system. It provides the abstraction of a single logical aggregation tree that mirrors a system's administrative hierarchy, a general interface for installing new aggregation functions, and eventual consistency on its data. Astrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree. This combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information. This high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes. Also, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes.

Recent research in peer-to-peer structured networks resulted in Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46] - a data structure that scales with the number of nodes and that distributes the read-write load for different queries among the participating nodes.
It is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node. Indeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects. It seems appealing to develop a SDIMS abstraction that exposes this internal functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs.

At first glance, it might appear obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS. However, meeting the SDIMS requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh? (2) How to provide flexibility in the aggregation to accommodate different application requirements? (3) How to adapt a global, flat DHT mesh to attain the administrative isolation property? and (4) How to provide robustness without unstructured gossip and total replication?

The key contributions of this paper that form the foundation of our SDIMS design are as follows.

1. We define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type. This abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes.

2. We provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and staleness.

3. We augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation.

4. We provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggregation that guarantees eventual consistency and (b) ensuring that our flexible API lets demanding applications gain additional robustness by using tunable spatial replication of data aggregates, by performing fast on-demand reaggregation to augment the underlying lazy reaggregation, or by doing both.

We have built a prototype of SDIMS. Through simulations and micro-benchmark experiments on a number of department machines and PlanetLab [27] nodes, we observe that the prototype achieves scalability with respect to both nodes and attributes through use of its flexible API, inflicts an order of magnitude lower maximum node stress than unstructured gossiping schemes, achieves isolation properties at a cost of modestly increased read latency compared to flat DHTs, and gracefully handles node failures.

This initial study discusses key aspects of an ongoing system-building effort, but it does not address all issues in building a SDIMS. For example, we believe that our strategies for providing robustness will mesh well with techniques such as supernodes [22] and other ongoing efforts to improve DHTs [30] for further improving robustness. Also, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree.
Additional work is needed to understand the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions.

In Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications. In Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS. In Section 5, we detail the implementation of our prototype system. Section 6 addresses the issue of adaptation to topological reconfigurations. In Section 7, we present the evaluation of our system through large-scale simulations and micro-benchmarks on real networks. Section 8 details the related work, and Section 9 summarizes our contribution.

2. AGGREGATION ABSTRACTION

Aggregation is a natural abstraction for a large-scale distributed information system because aggregation provides scalability by allowing a node to view detailed information about the state near it and progressively coarser-grained summaries about progressively larger subsets of a system's data [38].

Our aggregation abstraction is defined across a tree spanning all nodes in the system. Each physical node in the system is a leaf, and each subtree represents a logical group of nodes. Note that logical groups can correspond to administrative domains (e.g., a department or a university) or groups of nodes within a domain (e.g., 10 workstations on a LAN in a CS department). An internal non-leaf node, which we call a virtual node, is simulated by one or more physical nodes at the leaves of the subtree for which the virtual node is the root. We describe how to form such trees in a later section.

Each physical node has local data stored as a set of (attributeType, attributeName, value) tuples such as (configuration, numCPUs, 16), (mcast membership, session foo, yes), or (file stored, foo, myIPaddress). The system associates an aggregation function $f_{type}$ with each attribute type, and for each level-$i$ subtree $T_i$ in the system, the system defines an aggregate value $V_{i,type,name}$ for each (attributeType, attributeName) pair as follows. For a (physical) leaf node $T_0$ at level 0, $V_{0,type,name}$ is the locally stored value for the attribute type and name, or NULL if no matching tuple exists. Then the aggregate value for a level-$i$ subtree $T_i$ is the aggregation function for the type, $f_{type}$, computed across the aggregate values of each of $T_i$'s $k$ children:

$V_{i,type,name} = f_{type}(V_{i-1,type,name}^{0}, V_{i-1,type,name}^{1}, \ldots, V_{i-1,type,name}^{k-1})$

Although SDIMS allows arbitrary aggregation functions, it is often desirable that these functions satisfy the hierarchical computation property [21]:

$f(v_1, \ldots, v_n) = f(f(v_1, \ldots, v_{s_1}), f(v_{s_1+1}, \ldots, v_{s_2}), \ldots, f(v_{s_k+1}, \ldots, v_n))$

where $v_i$ is the value of an attribute at node $i$. For example, the average operation, defined as $avg(v_1, \ldots, v_n) = \frac{1}{n} \sum_{i=1}^{n} v_i$, does not satisfy the property. Instead, if an attribute stores values as tuples (sum, count), the attribute satisfies the hierarchical computation property while still allowing applications to compute the average from the aggregate sum and count values.

Finally, note that for a large-scale system, it is difficult or impossible to insist that the aggregation value returned by a probe corresponds to the function computed over the current values at the leaves at the instant of the probe. Therefore, our system provides only weak consistency guarantees - specifically, eventual consistency as defined in [38].
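To make the hierarchical computation property concrete, here is a minimal sketch in Java (the language of our prototype); the class and method names are our own illustration, not part of the SDIMS API. It represents an average-valued attribute as (sum, count) tuples, which aggregate hierarchically even though averages themselves do not.

    import java.util.List;

    // Illustration only: averaging via (sum, count) tuples satisfies the
    // hierarchical computation property; averaging averages does not.
    public class SumCountAggregation {
        record SumCount(double sum, long count) {
            double average() { return count == 0 ? 0.0 : sum / count; }
        }

        // f_type applied across the aggregate values of a virtual node's children.
        static SumCount aggregate(List<SumCount> childValues) {
            double sum = 0;
            long count = 0;
            for (SumCount c : childValues) { sum += c.sum(); count += c.count(); }
            return new SumCount(sum, count);
        }

        public static void main(String[] args) {
            // Two subtrees holding values {2, 4} and {6}; the global average is 4.
            SumCount left = aggregate(List.of(new SumCount(2, 1), new SumCount(4, 1)));
            SumCount root = aggregate(List.of(left, new SumCount(6, 1)));
            System.out.println(root.average()); // prints 4.0
        }
    }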
3. FLEXIBILITY

A major innovation of our work is enabling flexible aggregate computation and propagation. The definition of the aggregation abstraction allows considerable flexibility in how, when, and where aggregate values are computed and propagated. While previous systems [15, 29, 32, 35, 38, 46] implement a single static strategy, we argue that a SDIMS should provide flexible computation and propagation to efficiently support a wide variety of applications with diverse requirements. To provide this flexibility, we develop a simple interface that decomposes the aggregation abstraction into three pieces of functionality: install, update, and probe.

This definition of the aggregation abstraction allows our system to provide a continuous spectrum of strategies ranging from lazy aggregate computation and propagation on reads to aggressive immediate computation and propagation on writes. In Figure 1, we illustrate both extreme strategies and an intermediate strategy. Under the lazy Update-Local computation and propagation strategy, an update (or write) only affects local state. Then, a probe (or read) that reads a level-i aggregate value is sent up the tree to the issuing node's level-i ancestor and then down the tree to the leaves. The system then computes the desired aggregate value at each layer up the tree until the level-i ancestor that holds the desired value. Finally, the level-i ancestor sends the result down the tree to the issuing node. In the other extreme case of the aggressive Update-All immediate computation and propagation on writes [38], when an update occurs, changes are aggregated up the tree, and each new aggregate value is flooded to all of a node's descendants. In this case, each level-i node not only maintains the aggregate values for the level-i subtree but also receives and locally stores copies of all of its ancestors' level-j (j > i) aggregation values. Also, a leaf satisfies a probe for a level-i aggregate using purely local data. In an intermediate Update-Up strategy, the root of each subtree maintains the subtree's current aggregate value, and when an update occurs, the leaf node updates its local state and passes the update to its parent, and then each successive enclosing subtree updates its aggregate value and passes the new value to its parent. This strategy satisfies a leaf's probe for a level-i aggregate value by sending the probe up to the level-i ancestor of the leaf and then sending the aggregate value down to the leaf. Finally, notice that other strategies exist. In general, an Update-Up_k-Down_j strategy aggregates up to the kth level and propagates the aggregate values of a node at level l (s.t. l <= k) downward for j levels.

[Figure 1: Flexible API - behavior of the Update-Local, Update-Up, and Update-All strategies on an update, on a probe for the global aggregate value, and on a probe for a level-1 aggregate value.]

A SDIMS must provide a wide range of flexible computation and propagation strategies to applications for it to be a general abstraction. An application should be able to choose a particular mechanism based on its read-to-write ratio that reduces the bandwidth consumption while attaining the required responsiveness and precision.
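The three named strategies are simply points in the Update-Up_k-Down_j space. The following sketch (a hypothetical encoding of ours, not the prototype's API) records them as (up, down) parameter pairs.

    // Hypothetical encoding of the strategies above as Update-Up_k-Down_j pairs.
    public final class PropagationStrategy {
        public static final int ALL = Integer.MAX_VALUE; // "all levels"

        public final int up;   // k: how far upward each update is aggregated
        public final int down; // j: how far downward each new aggregate is sent

        public PropagationStrategy(int up, int down) {
            this.up = up;
            this.down = down;
        }

        public static final PropagationStrategy UPDATE_LOCAL = new PropagationStrategy(0, 0);
        public static final PropagationStrategy UPDATE_UP = new PropagationStrategy(ALL, 0);
        public static final PropagationStrategy UPDATE_ALL = new PropagationStrategy(ALL, ALL);
    }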
Note that the read-to-write ratios of the attributes that applications install vary extensively. For example, a read-dominated attribute like numCPUs rarely changes in value, while a write-dominated attribute like numProcesses changes quite often. An aggregation strategy like Update-All works well for read-dominated attributes but suffers high bandwidth consumption when applied to write-dominated attributes. Conversely, an approach like Update-Local works well for write-dominated attributes but suffers from unnecessary query latency or imprecision for read-dominated attributes.

SDIMS also allows non-uniform computation and propagation across the aggregation tree, with different up and down parameters in different subtrees, so that applications can adapt to spatial and temporal heterogeneity of read and write operations. With respect to spatial heterogeneity, access patterns may differ for different parts of the tree, requiring different propagation strategies for different parts of the tree. Similarly, with respect to temporal heterogeneity, access patterns may change over time, requiring different strategies over time.

3.1 Aggregation API

We provide the flexibility described above by splitting the aggregation API into three functions: Install() installs an aggregation function that defines an operation on an attribute type and specifies the update strategy that the function will use, Update() inserts or modifies a node's local value for an attribute, and Probe() obtains an aggregate value for a specified subtree. The install interface allows applications to specify the k and j parameters of the Update-Up_k-Down_j strategy along with the aggregation function. The update interface invokes the aggregation of an attribute on the tree according to the corresponding aggregation function's aggregation strategy. The probe interface not only allows applications to obtain the aggregated value for a specified tree but also allows a probing node to continuously fetch the values for a specified time, thus enabling an application to adapt to spatial and temporal heterogeneity. The rest of this section describes these three interfaces in detail.

3.1.1 Install

The Install operation installs an aggregation function in the system. The arguments for this operation are listed in Table 1. The attrType argument denotes the type of attributes on which this aggregation function is invoked. Installed functions are soft state that must be periodically renewed or they will be garbage collected at expTime.

Table 1: Arguments for the install operation

  parameter  description                                               optional
  attrType   Attribute Type
  aggrfunc   Aggregation Function
  up         How far upward each update is sent (default: all)         X
  down       How far downward each aggregate is sent (default: none)   X
  domain     Domain restriction (default: none)                        X
  expTime    Expiry Time

The arguments up and down specify the aggregate computation and propagation strategy Update-Up_k-Down_j. The domain argument, if present, indicates that the aggregation function should be installed on all nodes in the specified domain; otherwise the function is installed on all nodes in the system.
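The sketch below renders the three calls as a hypothetical Java interface; the prototype's actual signatures may differ. The update and probe arguments anticipate Table 2 and the next two subsections.

    import java.util.List;

    // Hypothetical rendering of the three-call aggregation API (Tables 1 and 2).
    public interface Sdims {
        // Install an aggregation function for an attribute type (Table 1).
        void install(String attrType, AggregationFunction aggrFunc,
                     int up, int down, String domain, long expTimeMillis);

        // Insert or modify this node's local value for an attribute.
        void update(String attrType, String attrName, Object value);

        // Obtain aggregate values for a specified subtree (Table 2).
        void probe(String attrType, String attrName, ProbeMode mode, int level,
                   int up, int down, long expTimeMillis, ProbeCallback callback);

        enum ProbeMode { ONE_SHOT, CONTINUOUS }

        interface AggregationFunction {
            Object aggregate(List<Object> childValues);
        }

        interface ProbeCallback {
            void onResult(int level, Object aggregateValue);
        }
    }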
3.1.2 Update

The Update operation takes three arguments (attrType, attrName, and value) and creates a new (attrType, attrName, value) tuple or updates the value of an old tuple with matching attrType and attrName at a leaf node. The update interface meshes with the installed aggregate computation and propagation strategy to provide flexibility. In particular, as outlined above and described in detail in Section 5, after a leaf applies an update locally, the update may trigger re-computation of aggregate values up the tree and may also trigger propagation of changed aggregate values down the tree. Notice that our abstraction associates an aggregation function with only an attrType but lets updates specify an attrName along with the attrType. This technique helps achieve scalability with respect to nodes and attributes, as described in Section 4.

3.1.3 Probe

The Probe operation returns the value of an attribute to an application. The complete argument set for the probe operation is shown in Table 2.

Table 2: Arguments for the probe operation

  parameter  description                                                    optional
  attrType   Attribute Type
  attrName   Attribute Name
  mode       Continuous or One-shot (default: one-shot)                     X
  level      Level at which aggregate is sought (default: at all levels)    X
  up         How far up to go and re-fetch the value (default: none)        X
  down       How far down to go and reaggregate (default: none)             X
  expTime    Expiry Time

Along with the attrName and attrType arguments, a level argument specifies the level at which the answers are required for an attribute. In our implementation, we choose to return results at all levels k < l for a level-l probe because (i) it is inexpensive, as the nodes traversed for a level-l probe also contain the level-k aggregates for k < l, and we expect the network cost of transmitting the additional information to be small for the small aggregates on which we focus, and (ii) it is useful, as applications can efficiently get several aggregates with a single probe (e.g., for domain-scoped queries, as explained in Section 4.2).

Probes with mode set to continuous and with finite expTime enable applications to handle spatial and temporal heterogeneity. When node A issues a continuous probe at level l for an attribute, then regardless of the up and down parameters, updates for the attribute at any node in A's level-l ancestor's subtree are aggregated up to level l, and the aggregated value is propagated down along the path from the ancestor to A. Note that continuous mode enables SDIMS to support a distributed sensor-actuator mechanism where a sensor monitors a level-i aggregate with a continuous-mode probe and triggers an actuator upon receiving new values for the probe.

The up and down arguments enable applications to perform on-demand fast re-aggregation during reconfigurations, where a forced re-aggregation is done for the corresponding levels even if the aggregated value is available, as we discuss in Section 6. When present, the up and down arguments are interpreted as described in the install operation.

3.1.4 Dynamic Adaptation

At the API level, the up and down arguments in the install API can be regarded as hints, since they suggest a computation strategy but do not affect the semantics of an aggregation function. A SDIMS implementation can dynamically adjust its up/down strategies for an attribute based on its measured read/write frequency. But a virtual intermediate node needs to know the current up and down propagation values to decide if the local aggregate is fresh enough to answer a probe. This is the key reason why up and down need to be statically defined at install time and cannot be specified in the update operation. For dynamic adaptation, we implement a lease-based mechanism where a node issues a lease to a parent or a child denoting that it will keep propagating the updates to that parent or child. We are currently evaluating different policies to decide when to issue a lease and when to revoke a lease.
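As a usage illustration, the fragment below sketches the file location service discussed in Section 4.1 against the hypothetical interface above: one installed function serves every named file, and a probe locates a nearby copy. The propagation parameters and expiry times are arbitrary choices for the example.

    import java.util.Objects;

    // Hypothetical usage of the Sdims interface sketched earlier.
    public class FileLocationExample {
        static void run(Sdims sdims, String myNodeId) {
            // The aggregate for (FILELOC, name) is the ID of some node in the
            // subtree storing the named file, or null if no such node exists.
            sdims.install("FILELOC",
                    childValues -> childValues.stream()
                            .filter(Objects::nonNull).findFirst().orElse(null),
                    /* up */ Integer.MAX_VALUE, /* down */ 0,
                    /* domain */ null, /* expTime */ 3_600_000L);

            // Advertise a locally stored file, then probe for its location.
            sdims.update("FILELOC", "foo", myNodeId);
            sdims.probe("FILELOC", "foo", Sdims.ProbeMode.ONE_SHOT, /* level */ 1,
                    /* up */ 0, /* down */ 0, /* expTime */ 10_000L,
                    (level, value) -> System.out.println("foo stored at: " + value));
        }
    }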
4. SCALABILITY

Our design achieves scalability with respect to both nodes and attributes through two key ideas. First, it carefully defines the aggregation abstraction to mesh well with its underlying scalable DHT system. Second, it refines the basic DHT abstraction to form an Autonomous DHT (ADHT) to achieve the administrative isolation properties that are crucial to scaling for large real-world systems. In this section, we describe these two ideas in detail.

4.1 Leveraging DHTs

In contrast to previous systems [4, 15, 38, 39, 45], SDIMS's aggregation abstraction specifies both an attribute type and an attribute name and associates an aggregation function with a type rather than just specifying and associating a function with a name. Installing a single function that can operate on many different named attributes matching a type improves scalability for sparse attribute types with large, sparsely-filled name spaces. For example, to construct a file location service, our interface allows us to install a single function that computes an aggregate value for any named file. A subtree's aggregate value for (FILELOC, name) would be the ID of a node in the subtree that stores the named file. Conversely, Astrolabe copes with sparse attributes by having aggregation functions compute sets or lists and suggests that scalability can be improved by representing such sets with Bloom filters [6]. Supporting sparse names within a type provides at least two advantages. First, when the value associated with a name is updated, only the state associated with that name needs to be updated and propagated to other nodes. Second, splitting values associated with different names into different aggregation values allows our system to leverage Distributed Hash Tables (DHTs) to map different names to different trees and thereby spread the function's logical root node's load and state across multiple physical nodes.

Given this abstraction, scalably mapping attributes to DHTs is straightforward. DHT systems assign a long, random ID to each node and define an algorithm to route a request for key k to a node root_k such that the union of paths from all nodes forms a tree DHTtree_k rooted at the node root_k. Now, as illustrated in Figure 2, by aggregating an attribute along the aggregation tree corresponding to DHTtree_k for k = hash(attribute type, attribute name), different attributes will be aggregated along different trees.

[Figure 2: The DHT tree corresponding to key 111 (DHTtree_111) and the corresponding aggregation tree.]

In comparison to a scheme where all attributes are aggregated along a single tree, aggregating along multiple trees incurs lower maximum node stress: whereas in a single aggregation tree approach, the root and the intermediate nodes pass around more messages than leaf nodes, in a DHT-based multi-tree, each node acts as an intermediate aggregation point for some attributes and as a leaf node for other attributes. Hence, this approach distributes the onus of aggregation across all nodes.
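A minimal sketch of this key derivation follows; the use of SHA-1 and the byte-level details are assumptions for illustration, not necessarily what the prototype does.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Maps an attribute key to a DHT key so that different attributes are
    // aggregated along different trees: k = hash(attribute type, attribute name).
    public class AttributeKeyMapper {
        static byte[] dhtKey(String attrType, String attrName) {
            try {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                md.update(attrType.getBytes(StandardCharsets.UTF_8));
                md.update((byte) 0); // separator so ("ab","c") differs from ("a","bc")
                md.update(attrName.getBytes(StandardCharsets.UTF_8));
                return md.digest();
            } catch (NoSuchAlgorithmException e) {
                throw new AssertionError("SHA-1 is a required JDK algorithm", e);
            }
        }
    }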
4.2 Administrative Isolation

Aggregation trees should provide administrative isolation by ensuring that, for each domain, the virtual node at the root of the smallest aggregation subtree containing all nodes of that domain is hosted by a node in that domain. Administrative isolation is important for three reasons: (i) for security, so that updates and probes flowing in a domain are not accessible outside the domain; (ii) for availability, so that queries for values in a domain are not affected by failures of nodes in other domains; and (iii) for efficiency, so that domain-scoped queries can be simple and efficient.

To provide administrative isolation to aggregation trees, a DHT should satisfy two properties:

1. Path Locality: Search paths should always be contained in the smallest possible domain.

2. Path Convergence: Search paths for a key from different nodes in a domain should converge at a node in that domain.

Existing DHTs support path locality [18] or can easily support it by using domain nearness as the distance metric [7, 17], but they do not guarantee path convergence, as those systems try to optimize the search path to the root to reduce response latency. For example, Pastry [32] uses prefix routing in which each node's routing table contains one row per hexadecimal digit in the nodeId space, where the ith row contains a list of nodes whose nodeIds differ from the current node's nodeId in the ith digit, with one entry for each possible digit value. Given a routing topology, to route a packet to an arbitrary destination key, a node in Pastry forwards a packet to the node with a nodeId prefix matching the key in at least one more digit than the current node. If such a node is not known, the current node uses an additional data structure, the leaf set, containing L immediate higher and lower neighbors in the nodeId space, and forwards the packet to a node with an identical prefix but that is numerically closer to the destination key in the nodeId space. This process continues until the destination node appears in the leaf set, after which the message is routed directly. Pastry's expected number of routing steps is log n, where n is the number of nodes, but as Figure 3 illustrates, this algorithm does not guarantee path convergence: if two nodes in a domain have nodeIds that match a key in the same number of bits, both of them can route to a third node outside the domain when routing for that key.

[Figure 3: Example showing how the isolation property is violated with original Pastry, along with the corresponding aggregation tree.]

Simple modifications to Pastry's route table construction and key-routing protocols yield an Autonomous DHT (ADHT) that satisfies the path locality and path convergence properties. As Figure 4 illustrates, whenever two nodes in a domain share the same prefix with respect to a key and no other node in the domain has a longer prefix, our algorithm introduces a virtual node at the boundary of the domain corresponding to that prefix plus the next digit of the key; such a virtual node is simulated by the existing node whose id is numerically closest to the virtual node's id.

[Figure 4: Autonomous DHT satisfying the isolation property, along with the corresponding aggregation tree.]

Our ADHT's routing table differs from Pastry's in two ways. First, each node maintains a separate leaf set for each domain of which it is a part. Second, nodes use two proximity metrics when populating the routing tables: hierarchical domain proximity is the primary metric, and network distance is secondary. Then, to route a packet to a global root for a key, the ADHT routing algorithm uses the routing table and the leaf set entries to route to each successive enclosing domain's root (the virtual or real node in the domain matching the key in the maximum number of digits). Additional details about the ADHT algorithm are available in an extended technical report [44].
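The two-metric preference can be pictured as a comparator over candidate routing table entries, as in the following sketch; the Node type and both metric methods are hypothetical.

    import java.util.Comparator;

    // Orders candidate neighbors for an ADHT routing table slot: hierarchical
    // domain proximity is primary, network distance is secondary.
    public class AdhtNeighborOrder {
        interface Node {
            int sharedDomainLevels(Node other);  // more shared levels = closer administratively
            double networkDistance(Node other);  // e.g., measured round-trip time
        }

        static Comparator<Node> preferenceFor(Node self) {
            return Comparator
                    .comparingInt((Node n) -> -n.sharedDomainLevels(self)) // primary metric
                    .thenComparingDouble(n -> n.networkDistance(self));    // secondary metric
        }
    }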
Properties. Maintaining a different leaf set for each administrative hierarchy level increases the number of neighbors that each node tracks to $(2^b) \cdot \log_{2^b} n + c \cdot l$ from $(2^b) \cdot \log_{2^b} n + c$ in unmodified Pastry, where b is the number of bits in a digit, n is the number of nodes, c is the leaf set size, and l is the number of domain levels. Routing requires $O(\log_{2^b} n + l)$ steps compared to $O(\log_{2^b} n)$ steps in Pastry; also, each routing hop may be longer than in Pastry because the modified algorithm's routing table prefers same-domain nodes over nearby nodes. We experimentally quantify the additional routing costs in Section 7.

In a large system, the ADHT topology allows domains to improve security for sensitive attribute types by installing them only within a specified domain. Then, aggregation occurs entirely within the domain, and a node external to the domain can neither observe nor affect the updates and aggregation computations of the attribute type. Furthermore, though we have not implemented this feature in the prototype, the ADHT topology would also support domain-restricted probes that could ensure that no one outside of a domain can observe a probe for data stored within the domain.

The ADHT topology also enhances availability by allowing the common case of probes for data within a domain to depend only on a domain's nodes. This, for example, allows a domain that becomes disconnected from the rest of the Internet to continue to answer queries for local data.

Aggregation trees that provide administrative isolation also enable the definition of simple and efficient domain-scoped aggregation functions to support queries like "what is the average load on machines in domain X?" For example, consider an aggregation function to count the number of machines in an example system with three machines, illustrated in Figure 5. Each leaf node l updates attribute NumMachines with a value v_l containing a set of tuples of the form (Domain, Count) for each domain of which the node is a part. In the example, the node A1 with name A1.A. performs an update with the value ((A1.A., 1), (A., 1), (., 1)). An aggregation function at an internal virtual node hosted on node N with child set C computes the aggregate as a set of tuples: for each domain D that N is part of, form a tuple $(D, \sum_{c \in C} \{count \mid (D, count) \in v_c\})$. This computation is illustrated in Figure 5.

[Figure 5: Example for domain-scoped queries.]

Now a query for NumMachines with level set to MAX will return the aggregate values at each intermediate virtual node on the path to the root as a set of (tree level, aggregated value) tuples, from which it is easy to extract the count of machines at each enclosing domain. For example, A1 would receive ((2, ((B1.B., 1), (B., 1), (., 3))), (1, ((A1.A., 1), (A., 2), (., 2))), (0, ((A1.A., 1), (A., 1), (., 1)))). Note that supporting domain-scoped queries would be less convenient and less efficient if aggregation trees did not conform to the system's administrative structure. It would be less efficient because each intermediate virtual node would have to maintain a list of all values at the leaves in its subtree along with their names, and it would be less convenient because applications that need an aggregate for a domain would have to pick values of nodes in that domain from the list returned by a probe and perform the computation themselves.
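The domain-scoped count can be written directly from this definition. The sketch below is our own rendering (the map representation and names are illustrative); it computes the per-domain tuples at a virtual node hosted on a node belonging to the given domains.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of the NumMachines aggregation: values are sets of (domain, count)
    // tuples, represented here as maps from domain name to count.
    public class NumMachinesAggregation {
        static Map<String, Integer> aggregate(List<String> hostDomains,
                                              List<Map<String, Integer>> childValues) {
            Map<String, Integer> result = new HashMap<>();
            for (String domain : hostDomains) {          // each domain D the host is part of
                int total = 0;
                for (Map<String, Integer> v : childValues) {
                    total += v.getOrDefault(domain, 0);  // sum counts reported for D
                }
                result.put(domain, total);
            }
            return result;
        }

        public static void main(String[] args) {
            // Leaf values from the example: nodes A1 and A2 in domain A.
            Map<String, Integer> a1 = Map.of("A1.A.", 1, "A.", 1, ".", 1);
            Map<String, Integer> a2 = Map.of("A2.A.", 1, "A.", 1, ".", 1);
            // The level-1 virtual node for domain A. is hosted on A1.
            Map<String, Integer> levelOne =
                    aggregate(List.of("A1.A.", "A.", "."), List.of(a1, a2));
            System.out.println(levelOne); // contains A1.A.=1, A.=2, .=2
        }
    }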
5. PROTOTYPE IMPLEMENTATION

The internal design of our SDIMS prototype comprises two layers: the Autonomous DHT (ADHT) layer manages the overlay topology of the system, and the Aggregation Management Layer (AML) maintains attribute tuples, performs aggregations, and stores and propagates aggregate values. Given the ADHT construction described in Section 4.2, each node implements an Aggregation Management Layer (AML) to support the flexible API described in Section 3. In this section, we describe the internal state and operation of the AML layer of a node in the system.

We refer to a store of (attribute type, attribute name, value) tuples as a Management Information Base, or MIB, following the terminology from Astrolabe [38] and SNMP [34]. We refer to an (attribute type, attribute name) tuple as an attribute key.

[Figure 6: Example illustrating the data structures and their organization at a node.]

As Figure 6 illustrates, each physical node in the system acts as several virtual nodes in the AML: a node acts as a leaf for all attribute keys, as a level-1 subtree root for keys whose hash matches the node's ID in b prefix bits (where b is the number of bits corrected in each step of the ADHT's routing scheme), as a level-i subtree root for attribute keys whose hash matches the node's ID in the initial i * b bits, and as the system's global root for attribute keys whose hash matches the node's ID in more prefix bits than any other node (in case of a tie, the first non-matching bit is ignored and the comparison is continued [46]).

To support hierarchical aggregation, each virtual node at the root of a level-i subtree maintains several MIBs that store (1) child MIBs containing raw aggregate values gathered from children, (2) a reduction MIB containing locally aggregated values across this raw information, and (3) an ancestor MIB containing aggregate values scattered down from ancestors. This basic strategy of maintaining child, reduction, and ancestor MIBs is based on Astrolabe [38], but our structured propagation strategy channels information that flows up according to its attribute key, and our flexible propagation strategy only sends child updates up and ancestor aggregate results down as far as specified by the attribute key's aggregation function. Note that in the discussion below, for ease of explanation, we assume that the routing protocol corrects a single bit at a time (b = 1). Our system, built upon Pastry, handles multi-bit correction (b = 4) with a simple extension to the scheme described here.

For a given virtual node n_i at level i, each child MIB contains the subset of a child's reduction MIB that contains tuples that match n_i's node ID in i bits and whose aggregation function's up attribute is at least i. These local copies make it easy for a node to recompute a level-i aggregate value when one child's input changes. Nodes maintain their child MIBs in stable storage and use a simplified version of the Bayou log exchange protocol (sans conflict detection and resolution) for synchronization after disconnections [26].

Virtual node n_i at level i maintains a reduction MIB of tuples, with a tuple for each key present in any child MIB containing the attribute type, attribute name, and output of the attribute type's aggregation function applied to the children's tuples.

A virtual node n_i at level i also maintains an ancestor MIB to store the tuples containing an attribute key and a list of aggregate values at different levels scattered down from ancestors. Note that the list for a key might contain multiple aggregate values for the same level but aggregated at different nodes (see Figure 4). So the aggregate values are tagged not only with level information but also with the ID of the node that performed the aggregation.

Level 0 differs slightly from other levels. Each level-0 leaf node maintains a local MIB rather than maintaining child MIBs and a reduction MIB. This local MIB stores information about the local node's state inserted by local applications via update() calls. We envision various sensor programs and applications inserting data into the local MIB. For example, one program might monitor local configuration and perform updates with information such as total memory, free memory, etc. A distributed file system might perform an update for each file stored on the local node.

Along with these MIBs, a virtual node maintains two other tables: an aggregation function table and an outstanding probes table. The aggregation function table contains the aggregation function and installation arguments (see Table 1) associated with an attribute type or an attribute type and name. Each aggregation function is installed on all nodes in a domain's subtree, so the aggregation function table can be thought of as a special case of the ancestor MIB, with domain functions always installed up to a root within a specified domain and down to all nodes within the domain. The outstanding probes table maintains temporary information regarding in-progress probes.

Given these data structures, it is simple to support the three API functions described in Section 3.1.
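The per-virtual-node state can be summarized as in the sketch below; all names are hypothetical renderings of the MIBs and tables just described.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical summary of the state kept by one virtual node in the AML.
    public class VirtualNodeState {
        record AttrKey(String type, String name) {}  // (attribute type, attribute name)
        // Aggregates scattered down from ancestors, tagged with the level and the
        // ID of the node that performed the aggregation.
        record AncestorValue(int level, String aggregatorId, Object value) {}

        // Raw aggregate values gathered from each child, keyed by child ID.
        final Map<String, Map<AttrKey, Object>> childMIBs = new HashMap<>();
        // Locally aggregated values computed across the child MIBs.
        final Map<AttrKey, Object> reductionMIB = new HashMap<>();
        // Values received from ancestors; a key may have several per level.
        final Map<AttrKey, List<AncestorValue>> ancestorMIB = new HashMap<>();
        // Aggregation functions and install arguments, by attribute type (or type and name).
        final Map<String, Object> aggregationFunctionTable = new HashMap<>();
        // Temporary state for in-progress probes.
        final Map<AttrKey, Object> outstandingProbes = new HashMap<>();
    }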
Install: The Install operation (see Table 1) installs on a domain an aggregation function that acts on a specified attribute type. Execution of an install operation for function aggrFunc on attribute type attrType proceeds in two phases: first, the install request is passed up the ADHT tree with the attribute key (attrType, null) until it reaches the root for that key within the specified domain. Then, the request is flooded down the tree and installed on all intermediate and leaf nodes.

Update: When a level-i virtual node receives an update for an attribute from a child below, it first recomputes the level-i aggregate value for the specified key, stores that value in its reduction MIB, and then, subject to the function's up and domain parameters, passes the updated value to the appropriate parent based on the attribute key. Also, the level-i (i >= 1) virtual node sends the updated level-i aggregate to all its children if the function's down parameter exceeds zero. Upon receipt of a level-i aggregate from a parent, a level-k virtual node stores the value in its ancestor MIB and, if k >= i - down, forwards this aggregate to its children.
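The update path can be summarized in code. The following sketch paraphrases the two cases above; the helper methods and the InstallArgs fields are hypothetical stand-ins for the prototype's internals, and the domain check is elided.

    // Sketch of update handling at a virtual node, following the text above.
    public abstract class UpdatePropagation {
        static class InstallArgs { int up; int down; }

        abstract InstallArgs argsFor(String attrType);
        abstract Object recomputeAggregate(String attrType, String attrName, int level);
        abstract void storeInReductionMIB(String attrType, String attrName, Object agg);
        abstract void storeInAncestorMIB(String attrType, String attrName, int level, Object agg);
        abstract void sendToParent(String attrType, String attrName, Object agg);
        abstract void sendToChildren(String attrType, String attrName, int level, Object agg);

        // A level-i virtual node receives an update for (attrType, attrName) from a child.
        void onChildUpdate(String attrType, String attrName, int i) {
            Object agg = recomputeAggregate(attrType, attrName, i); // f_type over child MIBs
            storeInReductionMIB(attrType, attrName, agg);
            InstallArgs f = argsFor(attrType);
            if (f.up > i) sendToParent(attrType, attrName, agg);    // subject to up (and domain)
            if (i >= 1 && f.down > 0) sendToChildren(attrType, attrName, i, agg);
        }

        // A level-k virtual node receives a level-i aggregate from its parent.
        void onParentAggregate(String attrType, String attrName, int i, int k, Object agg) {
            storeInAncestorMIB(attrType, attrName, i, agg);
            if (k >= i - argsFor(attrType).down) sendToChildren(attrType, attrName, i, agg);
        }
    }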
Probe: A probe collects and returns the aggregate value for a specified attribute key for a specified level of the tree. As Figure 1 illustrates, the system satisfies a probe for a level-i aggregate value using a four-phase protocol that may be short-circuited when updates have previously propagated either results or partial results up or down the tree. In phase 1, the route probe phase, the system routes the probe up the attribute key's tree to either the root of the level-i subtree or to a node that stores the requested value in its ancestor MIB. In the former case, the system proceeds to phase 2, and in the latter it skips to phase 4. In phase 2, the probe scatter phase, each node that receives a probe request sends it to all of its children unless the node's reduction MIB already has a value that matches the probe's attribute key, in which case the node initiates phase 3 on behalf of its subtree. In phase 3, the probe aggregation phase, when a node receives values for the specified key from each of its children, it executes the aggregation function on these values and either (a) forwards the result to its parent (if its level is less than i) or (b) initiates phase 4 (if it is at level i). Finally, in phase 4, the aggregate routing phase, the aggregate value is routed down to the node that requested it. Note that in the extreme case of a function installed with up = down = 0, a level-i probe can touch all nodes in a level-i subtree, while in the opposite extreme case of a function installed with up = down = ALL, a probe is a completely local operation at a leaf.

For probes that include phases 2 (probe scatter) and 3 (probe aggregation), an issue is how to decide when a node should stop waiting for its children to respond and send up its current aggregate value. A node stops waiting for its children when one of three conditions occurs: (1) all children have responded, (2) the ADHT layer signals one or more reconfiguration events that mark all children that have not yet responded as unreachable, or (3) a watchdog timer for the request fires. The last case accounts for nodes that participate in the ADHT protocol but that fail at the AML level.

At a virtual node, continuous probes are handled similarly to one-shot probes, except that such probes are stored in the outstanding probes table for a time period of expTime specified in the probe. Thus each update for an attribute triggers re-evaluation of continuous probes for that attribute.

We implement a lease-based mechanism for dynamic adaptation. A level-l virtual node for an attribute can issue a lease for the level-l aggregate to a parent or a child only if up is greater than l or it has leases from all its children. A virtual node at level l can issue a lease for a level-k aggregate (k > l) to a child only if down >= k - l or if it has the lease for that aggregate from its parent. Now a probe for a level-k aggregate can be answered by a level-l virtual node if it has a valid lease, irrespective of the up and down values. We are currently designing different policies to decide when to issue a lease and when to revoke a lease and are also evaluating them with the above mechanism.

Our current prototype does not implement access control on install, update, and probe operations, but we plan to implement Astrolabe's [38] certificate-based restrictions. Also, our current prototype does not restrict the resource consumption in executing the aggregation functions; however, techniques from research on resource management in server systems and operating systems [2, 3] can be applied here.
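The lease rules reduce to three predicates, sketched below with hypothetical field names; the combined answer check is our reading of the rules, not the prototype's exact code.

    import java.util.Set;

    // Sketch of the lease rules for a level-l virtual node of one attribute.
    public class LeaseRules {
        int level;                     // l: this virtual node's level
        int up, down;                  // installed Update-Up_k-Down_j parameters
        boolean leasesFromAllChildren; // do we hold leases from every child?
        Set<Integer> leasesFromParent; // levels k of ancestor aggregates leased to us

        // May we issue a lease for our own level-l aggregate to a parent or child?
        boolean mayIssueOwnLease() {
            return up > level || leasesFromAllChildren;
        }

        // May we issue a lease for a level-k aggregate (k > l) to a child?
        boolean mayIssueAncestorLease(int k) {
            return down >= k - level || leasesFromParent.contains(k);
        }

        // With a valid lease, a probe for a level-k aggregate is answered locally,
        // irrespective of the up and down values.
        boolean mayAnswerLocally(int k) {
            return (k == level && mayIssueOwnLease()) || leasesFromParent.contains(k);
        }
    }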
6. ROBUSTNESS

In large-scale systems, reconfigurations are common. Our two main principles for robustness are to guarantee (i) read availability - probes complete in finite time - and (ii) eventual consistency - updates by a live node will be visible to probes by connected nodes in finite time. During reconfigurations, a probe might return a stale value for two reasons. First, reconfigurations lead to incorrectness in the previous aggregate values. Second, the nodes needed for aggregation to answer the probe may become unreachable. Our system also provides two hooks that applications can use for improved end-to-end robustness in the presence of reconfigurations: (1) on-demand re-aggregation and (2) application-controlled replication.

Our system handles reconfigurations at two levels: adaptation at the ADHT layer to ensure connectivity and adaptation at the AML layer to ensure access to the data in SDIMS.

6.1 ADHT Adaptation

Our ADHT layer adaptation algorithm is the same as Pastry's adaptation algorithm [32]: the leaf sets are repaired as soon as a reconfiguration is detected, and the routing table is repaired lazily. Note that maintaining extra leaf sets does not degrade the fault-tolerance property of the original Pastry; indeed, it enhances the resilience of ADHTs to failures by providing additional routing links. Due to redundancy in the leaf sets and the routing table, updates can be routed towards their root nodes successfully even during failures. Also note that the administrative isolation property satisfied by our ADHT algorithm ensures that reconfigurations in a level-i domain do not affect the probes for level i in a sibling domain.

6.2 AML Adaptation

Broadly, we use two types of strategies for AML adaptation in the face of reconfigurations: (1) replication in time as a fundamental baseline strategy, and (2) replication in space as an additional performance optimization that falls back on replication in time when the system runs out of replicas. We provide two mechanisms for replication in time. First, lazy re-aggregation propagates already received updates to new children or new parents in a lazy fashion over time. Second, applications can reduce the probability of probe response staleness during such repairs through our flexible API with an appropriate setting of the down parameter.

Lazy Re-aggregation: The DHT layer informs the AML layer about reconfigurations in the network using three function calls: newParent, failedChild, and newChild. On newParent(parent, prefix), all probes in the outstanding probes table corresponding to prefix are re-evaluated. If parent is not null, then aggregation functions and already existing data are lazily transferred in the background. Any new updates, installs, and probes for this prefix are sent to the parent immediately. On failedChild(child, prefix), the AML layer marks the child as inactive, and any outstanding probes that are waiting for data from this child are re-evaluated. On newChild(child, prefix), the AML layer creates space in its data structures for this child.
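These three notifications can be captured as callbacks, as sketched below; the handler bodies paraphrase the text above, and the helper methods are hypothetical.

    // Sketch of the DHT-to-AML reconfiguration notifications described above.
    public abstract class AmlReconfigurationHandler {
        abstract void reevaluateOutstandingProbes(String prefix);
        abstract void lazilyTransferStateTo(String parent, String prefix);
        abstract void markChildInactive(String child, String prefix);
        abstract void allocateChildState(String child, String prefix);

        void newParent(String parent, String prefix) {
            reevaluateOutstandingProbes(prefix);       // re-evaluate probes for this prefix
            if (parent != null) {
                lazilyTransferStateTo(parent, prefix); // background transfer of functions
            }                                          // and data; new traffic goes to the
        }                                              // parent immediately

        void failedChild(String child, String prefix) {
            markChildInactive(child, prefix);          // stop waiting on this child
            reevaluateOutstandingProbes(prefix);       // re-evaluate probes awaiting its data
        }

        void newChild(String child, String prefix) {
            allocateChildState(child, prefix);         // create space in the data structures
        }
    }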
[Figure 7: Default lazy data re-aggregation time line.]

Figure 7 shows the time line for the default lazy re-aggregation upon reconfiguration. Probes initiated between points 1 and 2 that are affected by reconfigurations are re-evaluated by AML upon detecting the reconfiguration. Probes that complete or start between points 2 and 8 may return stale answers.

On-demand Re-aggregation: The default lazy aggregation scheme lazily propagates the old updates in the system. Additionally, using the up and down knobs in the Probe API, applications can force on-demand fast re-aggregation of updates to avoid staleness in the face of reconfigurations. In particular, if an application detects or suspects an answer as stale, then it can re-issue the probe with increased up and down parameters to force the refreshing of the cached data. Note that this strategy will be useful only after the DHT adaptation is completed (point 6 on the time line in Figure 7).

Replication in Space: Replication in space is more challenging in our system than in a DHT file location application because replication in space can be achieved easily in the latter by just replicating the root node's contents. In our system, however, all internal nodes have to be replicated along with the root. In our system, applications control replication in space using the up and down knobs in the Install API; with large up and down values, aggregates at the intermediate virtual nodes are propagated to more nodes in the system. By reducing the number of nodes that have to be accessed to answer a probe, applications can reduce the probability of incorrect results occurring due to the failure of nodes that do not contribute to the aggregate. For example, in a file location application, using a non-zero positive down parameter ensures that a file's global aggregate is replicated on nodes other than the root. Probes for the file location can then be answered without accessing the root; hence they are not affected by the failure of the root. However, note that this technique is not appropriate in some cases. An aggregated value in a file location system is valid as long as the node hosting the file is active, irrespective of the status of other nodes in the system, whereas an application that counts the number of machines in a system may receive incorrect results irrespective of the replication. If reconfigurations are only transient (like a node temporarily not responding due to a burst of load), the replicated aggregate closely or correctly resembles the current state.

7. EVALUATION

We have implemented a prototype of SDIMS in Java using the FreePastry framework [32] and performed large-scale simulation experiments and micro-benchmark experiments on two real networks: 187 machines in the department and 69 machines on the PlanetLab [27] testbed. In all experiments, we use static up and down values and turn off dynamic adaptation. Our evaluation supports four main conclusions. First, the flexible API provides different propagation strategies that minimize communication resources at different read-to-write ratios.
For example, in our simulation we observe Update-Local to be efficient for read-to-write ratios below 0.0001, Update-Up around 1, and Update-All above 50000. Second, our system is scalable with respect to both nodes and attributes. In particular, we find that the maximum node stress in our system is an order of magnitude lower than observed with an Update-All, gossiping approach. Third, in contrast to unmodified Pastry, which violates the path convergence property in up to 14% of cases, our system conforms to the property. Fourth, the system is robust to reconfigurations and adapts to failures within a few seconds.

7.1 Simulation Experiments

Flexibility and Scalability: A major innovation of our system is its ability to provide flexible computation and propagation of aggregates. In Figure 8, we demonstrate the flexibility exposed by the aggregation API explained in Section 3. We simulate a system with 4096 nodes arranged in a domain hierarchy with a branching factor (bf) of 16 and install several attributes with different up and down parameters. We plot the average number of messages per operation incurred for a wide range of read-to-write ratios of the operations for different attributes. Simulations with other sizes of networks with different branching factors reveal similar results.

[Figure 8: Flexibility of our approach, with different up and down values in a network of 4096 nodes for different read-to-write ratios.]

This graph clearly demonstrates the benefit of supporting a wide range of computation and propagation strategies. Although having a small up value is efficient for attributes with low read-to-write ratios (write-dominated applications), the probe latency, when reads do occur, may be high, since the probe needs to aggregate the data from all the nodes that did not send their aggregates up. Conversely, applications that wish to improve probe overheads or latencies can increase their up and down propagation at a potential cost of increased write overheads.

Compared to an existing Update-All single aggregation tree approach [38], scalability in SDIMS comes from (1) leveraging DHTs to form multiple aggregation trees that split the load across nodes and (2) flexible propagation that avoids propagation of all updates to all nodes. Figure 9 demonstrates SDIMS's scalability with nodes and attributes. For this experiment, we build a simulator to simulate both Astrolabe [38] (a gossiping, Update-All approach) and our system for an increasing number of sparse attributes. Each attribute corresponds to the membership in a multicast session with a small number of participants. For this experiment, the session size is set to 8, the branching factor is set to 16, the propagation mode for SDIMS is Update-Up, and the participant nodes perform continuous probes for the global aggregate value.

[Figure 9: Maximum node stress for a gossiping approach versus an ADHT-based approach for different numbers of nodes with increasing numbers of sparse attributes.]

We plot the maximum node stress (in terms of messages) observed in both schemes for different sized networks with increasing numbers of sessions when the participant of each session performs an update operation. Clearly, the DHT-based scheme is more scalable with respect to attributes than an Update-All gossiping scheme.
Compared to an existing Update-All single aggregation tree approach [38], scalability in SDIMS comes from (1) leveraging DHTs to form multiple aggregation trees that split the load across nodes and (2) flexible propagation that avoids propagating all updates to all nodes. Figure 9 demonstrates SDIMS's scalability with nodes and attributes. For this experiment, we build a simulator to simulate both Astrolabe [38] (a gossiping, Update-All approach) and our system for an increasing number of sparse attributes. Each attribute corresponds to membership in a multicast session with a small number of participants. For this experiment, the session size is set to 8, the branching factor is set to 16, the propagation mode for SDIMS is Update-Up, and the participant nodes perform continuous probes for the global aggregate value. We plot the maximum node stress (in terms of messages) observed in both schemes for different network sizes with an increasing number of sessions when the participant of each session performs an update operation. Clearly, the DHT-based scheme is more scalable with respect to attributes than an Update-All gossiping scheme. Observe that for a constant number of attributes, as the number of nodes increases, the maximum node stress increases in the gossiping approach, while it decreases in our approach as the load of aggregation is spread across more nodes. Simulations with other session sizes (4 and 16) yield similar results.

Figure 9: Maximum node stress for a gossiping approach versus the ADHT-based approach for different numbers of nodes with an increasing number of sparse attributes.

Administrative Hierarchy and Robustness: Although the routing protocol of ADHT might lead to an increased number of hops to reach the root for a key compared to original Pastry, the algorithm conforms to the path convergence and locality properties and thus provides the administrative isolation property. In Figure 10, we quantify the increased path length by comparison with unmodified Pastry for different network sizes and different branching factors of the domain hierarchy tree. To quantify the path convergence property, we perform simulations with a large number of probe pairs, each pair probing for a random key starting from two randomly chosen nodes. In Figure 11, we plot the percentage of probe pairs for unmodified Pastry that do not conform to the path convergence property. When the branching factor is low, the domain hierarchy tree is deeper, resulting in a large difference between Pastry and ADHT in the average path length; but it is at these small domain sizes that path convergence fails more often with the original Pastry.

Figure 10: Average path length to root in Pastry versus ADHT for different branching factors. Note that all lines corresponding to Pastry overlap.

Figure 11: Percentage of probe pairs whose paths to the root did not conform to the path convergence property with Pastry.

7.2 Testbed experiments
We run our prototype on 180 department machines (some machines ran multiple node instances, so this configuration has a total of 283 SDIMS nodes) and also on 69 machines of the PlanetLab [27] testbed. We measure the performance of our system with two micro-benchmarks. In the first micro-benchmark, we install three aggregation functions of types Update-Local, Update-Up, and Update-All, perform an update operation on all nodes for all three aggregation functions, and measure the latencies incurred by probes for the global aggregate from all nodes in the system. Figure 12 shows the observed latencies for both testbeds.

Figure 12: Latency of probes for the aggregate at the global root level with three different modes of aggregate propagation on (a) department machines, and (b) PlanetLab machines.
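The following sketch shows one node's slice of this micro-benchmark under a hypothetical Java binding of the Install, Update, and Probe operations; the Sdims and Aggregator interfaces and the use of ALL as the global root level are illustrative assumptions, and the real benchmark driver coordinates many nodes rather than a single method:

// Hypothetical Java binding of the Install/Update/Probe operations.
interface Aggregator { Object aggregate(java.util.List<Object> kids); }

interface Sdims {
    int ALL = Integer.MAX_VALUE;   // illustrative encoding of ALL / global root level
    void install(String attrType, Aggregator f, int up, int down);
    void update(String attrType, String attrName, Object value);
    Object probe(String attrType, String attrName, int level);
}

class ProbeLatencyBenchmark {
    // Runs on every node: install the three strategies, contribute a local
    // value, then time a probe for the global (root-level) aggregate of each.
    static void run(Sdims s) {
        Aggregator sum = kids ->
            kids.stream().filter(java.util.Objects::nonNull)
                .mapToInt(v -> (Integer) v).sum();
        s.install("local", sum, 0, 0);               // Update-Local
        s.install("up", sum, Sdims.ALL, 0);          // Update-Up
        s.install("all", sum, Sdims.ALL, Sdims.ALL); // Update-All
        for (String type : new String[] { "local", "up", "all" }) {
            s.update(type, "bench", 10);
            long t0 = System.nanoTime();
            Object global = s.probe(type, "bench", Sdims.ALL);
            System.out.printf("%s: %d ms (value %s)%n",
                type, (System.nanoTime() - t0) / 1_000_000, global);
        }
    }
}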
Notice that the latency in Update-Local is high compared to the Update-Up policy. This is because latency in Update-Local is affected by the presence of even a single slow machine or a single machine with a high-latency network connection.

In the second micro-benchmark, we examine robustness. We install one aggregation function of type Update-Up that performs a sum operation on an integer-valued attribute. Each node updates the attribute with the value 10. Then we monitor the latencies and results returned by probe operations for the global aggregate on one chosen node, while we kill some nodes after every few probes. Figure 13 shows the results on the departmental testbed. Due to the nature of the testbed (machines in a department), there is little change in the latencies even in the face of reconfigurations. In Figure 14, we present the results of the experiment on the PlanetLab testbed. The root node of the aggregation tree is terminated after about 275 seconds. There is a 5X increase in the latencies after the death of the initial root node, as a more distant node becomes the root node after repairs. In both experiments, the values returned by probes start reflecting the correct situation within a short time after the failures.

Figure 13: Micro-benchmark on the department network showing the behavior of the probes from a single node while failures occur at other nodes. All 283 nodes assign a value of 10 to the attribute.

Figure 14: Probe performance during failures on 69 machines of the PlanetLab testbed.

From both the testbed benchmark experiments and the simulation experiments on flexibility and scalability, we conclude that (1) the flexibility provided by SDIMS allows applications to trade off read-write overheads (Figure 8), read latency, and sensitivity to slow machines (Figure 12), (2) a good default aggregation strategy is Update-Up, which has moderate overheads on both reads and writes (Figure 8), has moderate read latencies (Figure 12), and is scalable with respect to both nodes and attributes (Figure 9), and (3) small domain sizes are the cases where DHT algorithms fail to provide path convergence most often, and SDIMS ensures path convergence with only a moderate increase in path lengths (Figure 11).

7.3 Applications
SDIMS is designed as a general distributed monitoring and control infrastructure for a broad range of applications. Above, we discuss some simple micro-benchmarks, including a multicast membership service and a calculate-sum function. Van Renesse et al. [38] provide detailed examples of how such a service can be used for a peer-to-peer caching directory, a data-diffusion service, a publish-subscribe system, barrier synchronization, and voting. Additionally, we have initial experience using SDIMS to construct two significant applications: the control plane for a large-scale distributed file system [12] and a network monitor for identifying heavy hitters that consume excess resources.

Distributed file system control: The PRACTI (Partial Replication, Arbitrary Consistency, Topology Independence) replication system provides a set of mechanisms for data replication over which arbitrary control policies can be layered. We use SDIMS to provide several key functions in order to create a file system over the low-level PRACTI mechanisms.

First, nodes use SDIMS as a directory to handle read misses. When a node n receives an object o, it updates the (ReadDir, o) attribute with the value n; when n discards o from its local store, it resets (ReadDir, o) to NULL. At each virtual node, the ReadDir aggregation function simply selects a random non-null child value (if any), and we use the Update-Up policy for propagating updates.
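A minimal sketch of such a ReadDir aggregation function follows; the Aggregator interface is a hypothetical stand-in for the aggregation-function signature that the prototype installs:

// Hypothetical aggregation-function interface; one value per child subtree.
interface Aggregator {
    Object aggregate(java.util.List<Object> childValues);
}

// ReadDir: pick a random child subtree that holds a copy of the object, so
// the aggregate names one (arbitrary) nearby holder, or null if none exists.
class ReadDirFunction implements Aggregator {
    private final java.util.Random rng = new java.util.Random();

    public Object aggregate(java.util.List<Object> childValues) {
        java.util.List<Object> holders = new java.util.ArrayList<>();
        for (Object v : childValues) {
            if (v != null) holders.add(v); // null means "no copy in this subtree"
        }
        return holders.isEmpty() ? null : holders.get(rng.nextInt(holders.size()));
    }
}

Selecting a random holder, rather than always the first, spreads demand-read load across the replicas within a subtree.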
Finally, to locate a nearby copy of an object o, a node n1 issues a series of probe requests for the (ReadDir, o) attribute, starting with level = 1 and increasing the level value with each repeated probe request until a non-null node ID n2 is returned. n1 then sends a demand read request to n2, and n2 sends the data if it has it. Conversely, if n2 does not have a copy of o, it sends a nack to n1, and n1 issues a retry probe with the down parameter set to a value larger than that used in the previous probe in order to force on-demand re-aggregation, which yields a fresher value for the retry.
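Under the same hypothetical Java binding as the earlier sketches, this read-miss path might look like the following; the interface and helper names are illustrative:

// Hypothetical binding of the Probe operation with explicit up/down knobs.
interface Sdims {
    Object probe(String attrType, String attrName, int level, int up, int down);
}

class ReadMissHandler {
    // Walk up the aggregation tree for (ReadDir, o) until some level names a
    // holder. The caller then issues a demand read to the returned node ID.
    static Object locate(Sdims s, String o, int maxLevel) {
        for (int level = 1; level <= maxLevel; level++) {
            Object n2 = s.probe("ReadDir", o, level, 0, 0);
            if (n2 != null) return n2;
        }
        return null; // no copy known anywhere below the root
    }

    // After a nack from a stale hint, force on-demand re-aggregation by
    // probing again with a larger down value than the previous attempt.
    static Object retryAfterNack(Sdims s, String o, int level, int prevDown) {
        return s.probe("ReadDir", o, level, 0, prevDown + 1);
    }
}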
Second, nodes subscribe to invalidations and updates for interest sets of files, and nodes use SDIMS to set up and maintain per-interest-set, network-topology-sensitive spanning trees for propagating this information. To subscribe to invalidations for interest set i, a node n1 first updates the (Inval, i) attribute with its identity n1, and the aggregation function at each virtual node selects one non-null child value. Then, n1 probes increasing levels of the (Inval, i) attribute until it finds the first node n2 ≠ n1; n1 then uses n2 as its parent in the spanning tree. n1 also issues a continuous probe for this attribute at this level so that it is notified of any change to its spanning tree parent. Spanning trees for streams of pushed updates are maintained in a similar manner.

In the future, we plan to use SDIMS for at least two additional services within this replication system. First, we plan to use SDIMS to track the read and write rates to different objects; prefetch algorithms will use this information to prioritize replication [40, 41]. Second, we plan to track the ranges of invalidation sequence numbers seen by each node for each interest set in order to augment the spanning trees described above with additional hole filling to allow nodes to locate specific invalidations they have missed.

Overall, our initial experience with using SDIMS for the PRACTI replication system suggests that (1) the general aggregation interface provided by SDIMS simplifies the construction of distributed applications: given the low-level PRACTI mechanisms, we were able to construct a basic file system that uses SDIMS for several distinct control tasks in under two weeks, and (2) the weak consistency guarantees provided by SDIMS meet the requirements of this application: each node's controller effectively treats information from SDIMS as hints, and if a contacted node does not have the needed data, the controller retries, using SDIMS on-demand re-aggregation to obtain a fresher hint.

Distributed heavy hitter problem: The goal of the heavy hitter problem is to identify network sources, destinations, or protocols that account for significant or unusual amounts of traffic. As noted by Estan et al. [13], this information is useful for a variety of applications such as intrusion detection (e.g., port scanning), denial-of-service detection, worm detection and tracking, fair network allocation, and network maintenance. Significant work has been done on developing high-performance stream-processing algorithms for identifying heavy hitters at one router, but this is just a first step; ideally these applications would like not just one router's view of the heavy hitters but an aggregate view. We use SDIMS to allow local information about heavy hitters to be pooled into a view of global heavy hitters. For each destination IP address IPx, a node updates the attribute (DestBW, IPx) with the number of bytes sent to IPx in the last time window. The aggregation function for attribute type DestBW is installed with the Update-Up strategy and simply adds the values from child nodes. Nodes perform a continuous probe for the global aggregate of the attribute and raise an alarm when the global aggregate value goes above a specified limit. Note that only nodes sending data to a particular IP address perform probes for the corresponding attribute. Also note that techniques from [25] can be extended to the hierarchical case to trade off precision for communication bandwidth.
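Putting the pieces together, the per-node monitoring logic might look like the following sketch; the Sdims and Aggregator interfaces, the ALL constant, and the callback form of the continuous probe are again illustrative assumptions rather than the prototype's actual API:

// Hypothetical binding: install, update, and a continuous probe whose
// callback fires whenever the probed aggregate changes.
interface Aggregator { Object aggregate(java.util.List<Object> kids); }

interface Sdims {
    int ALL = Integer.MAX_VALUE; // illustrative encoding of the global root level
    void install(String attrType, Aggregator f, int up, int down);
    void update(String attrType, String attrName, Object value);
    void probeContinuous(String attrType, String attrName, int level,
                         java.util.function.Consumer<Object> onChange);
}

class HeavyHitterMonitor {
    // Called each time window on a node that sent traffic to destIp.
    static void report(Sdims s, String destIp, long bytesThisWindow, long limit) {
        Aggregator sum = kids ->
            kids.stream().filter(java.util.Objects::nonNull)
                .mapToLong(v -> (Long) v).sum();
        s.install("DestBW", sum, Sdims.ALL, 0);      // Update-Up; installs are soft
                                                     // state, so re-installing renews
        s.update("DestBW", destIp, bytesThisWindow); // local bytes in last window
        s.probeContinuous("DestBW", destIp, Sdims.ALL, total -> {
            if (total != null && (Long) total > limit) {
                System.err.println("ALARM: heavy hitter " + destIp + " -> " + total);
            }
        });
    }
}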
8. RELATED WORK
The aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project. Astrolabe adopts a Propagate-All strategy and unstructured gossiping techniques to attain robustness [5]. However, any gossiping scheme requires aggressive replication of the aggregates. While such aggressive replication is efficient for read-dominated attributes, it incurs a high message cost for attributes with a small read-to-write ratio. Our approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios. Other closely related projects include Willow [39], Cone [4], DASIS [1], and SOMO [45]. Willow, DASIS, and SOMO build a single tree for aggregation. Cone builds a tree per attribute and requires a total order on the attribute values.

Several academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems. Some of them are centralized, where all the monitoring data is collected and analyzed at a central host. Ganglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and then cluster aggregates are further aggregated along a single tree. Sophia [42] is a distributed monitoring system designed with a declarative logic programming model where the location of query execution is both explicit in the language and can be calculated during evaluation. This research is complementary to our work. TAG [21] collects information from a large number of sensors along a single tree.

The observation that DHTs internally provide a scalable forest of reduction trees is not new. Plaxton et al.'s [28] original paper describes not a DHT, but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects. Many systems, building upon both Plaxton's bit-correcting strategy [32, 46] and upon other strategies [24, 29, 35], have chosen to hide this power and export a simple and general distributed hash table abstraction as a useful building block for a broad range of distributed applications. Some of these systems internally make use of the reduction forest not only for routing but also for caching [32], but for simplicity, these systems do not generally export this powerful functionality in their external interface. Our goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction. Although object location is a predominant target application for DHTs, several other applications like multicast [8, 9, 33, 36] and DNS [11] are also built using DHTs. All these systems implicitly perform aggregation on some attribute, and each one of them must be designed to handle any reconfigurations in the underlying DHT. With the aggregation abstraction provided by our system, designing and building such applications becomes easier.

Internal DHT trees typically do not satisfy the domain locality properties required in our system. Castro et al. [7] and Gummadi et al. [17] point out the importance of path convergence from the perspective of achieving efficiency and investigate the performance of Pastry and other DHT algorithms, respectively. SkipNet [18] provides domain-restricted routing, where a key search is limited to a specified domain. This interface can be used to ensure path convergence by searching in the lowest domain and moving up to the next domain when the search reaches the root in the current domain. Although this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs, as the domain-constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).

9. CONCLUSIONS
This paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications. For large-scale systems, hierarchical aggregation is a fundamental abstraction for scalability. We build our system by extending ideas from Astrolabe and DHTs to achieve (i) scalability with respect to both nodes and attributes through a new aggregation abstraction that helps leverage DHTs' internal trees for aggregation, (ii) flexibility through a simple API that lets applications control propagation of reads and writes, (iii) administrative isolation through simple augmentations of current DHT algorithms, and (iv) robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.

Acknowledgements
We are grateful to J.C. Browne, Robbert van Renesse, Amin Vahdat, Jay Lepreau, and the anonymous reviewers for their helpful comments on this work.

10. REFERENCES
[1] K. Albrecht, R. Arnold, M. Gahwiler, and R. Wattenhofer. Join and Leave in Peer-to-Peer Systems: The DASIS Approach. Technical report, CS, ETH Zurich, 2003.
[2] G. Back, W. H. Hsieh, and J. Lepreau. Processes in KaffeOS: Isolation, Resource Management, and Sharing in Java. In Proc. OSDI, Oct. 2000.
[3] G. Banga, P. Druschel, and J. Mogul. Resource Containers: A New Facility for Resource Management in Server Systems. In Proc. OSDI, Feb. 1999.
[4] R. Bhagwan, P. Mahadevan, G. Varghese, and G. M. Voelker. Cone: A Distributed Heap-Based Approach to Resource Selection. Technical Report CS2004-0784, UCSD, 2004.
[5] K. P. Birman. The Surprising Power of Epidemic Communication. In Proceedings of FuDiCo, 2003.
[6] B. Bloom. Space/time tradeoffs in hash coding with allowable errors. Comm. of the ACM, 13(7):422-425, 1970.
[7] M. Castro, P. Druschel, Y. C. Hu, and A. Rowstron. Exploiting Network Proximity in Peer-to-Peer Overlay Networks. Technical Report MSR-TR-2002-82, MSR.
[8] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh. SplitStream: High-bandwidth Multicast in a Cooperative Environment. In SOSP, 2003.
[9] M. Castro, P. Druschel, A.-M. Kermarrec, and A. Rowstron. SCRIBE: A Large-scale and Decentralised Application-level Multicast Infrastructure. IEEE JSAC (Special issue on Network Support for Multicast Communications), 2002.
[10] J. Challenger, P. Dantzig, and A. Iyengar. A scalable and highly available system for serving dynamic data at frequently accessed web sites. In Proceedings of ACM/IEEE Supercomputing '98 (SC98), Nov. 1998.
[11] R. Cox, A. Muthitacharoen, and R. T. Morris. Serving DNS using a Peer-to-Peer Lookup Service. In IPTPS, 2002.
[12] M. Dahlin, L. Gao, A. Nayate, A. Venkataramani, P. Yalagandula, and J. Zheng. PRACTI replication for large-scale systems. Technical Report TR-04-28, The University of Texas at Austin, 2004.
[13] C. Estan, G. Varghese, and M. Fisk. Bitmap algorithms for counting active flows on high speed links. In Internet Measurement Conference, 2003.
[14] Y. Fu, J. Chase, B. Chun, S. Schwab, and A. Vahdat. SHARP: An architecture for secure resource peering. In Proc. SOSP, Oct. 2003.
[15] Ganglia: Distributed Monitoring and Execution System. http://ganglia.sourceforge.net.
[16] S. Gribble, A. Halevy, Z. Ives, M. Rodrig, and D. Suciu. What Can Peer-to-Peer Do for Databases, and Vice Versa? In Proceedings of the WebDB, 2001.
[17] K. Gummadi, R. Gummadi, S. D. Gribble, S. Ratnasamy, S. Shenker, and I. Stoica. The Impact of DHT Routing Geometry on Resilience and Proximity. In SIGCOMM, 2003.
[18] N. J. A. Harvey, M. B. Jones, S. Saroiu, M. Theimer, and A. Wolman. SkipNet: A Scalable Overlay Network with Practical Locality Properties. In USITS, March 2003.
[19] R. Huebsch, J. M. Hellerstein, N. Lanham, B. T. Loo, S. Shenker, and I. Stoica. Querying the Internet with PIER. In Proceedings of the VLDB Conference, May 2003.
[20] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed diffusion: a scalable and robust communication paradigm for sensor networks. In MobiCom, 2000.
[21] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a Tiny AGgregation Service for ad-hoc Sensor Networks. In OSDI, 2002.
[22] D. Malkhi. Dynamic Lookup Networks. In FuDiCo, 2002.
[23] M. L. Massie, B. N. Chun, and D. E. Culler. The ganglia distributed monitoring system: Design, implementation, and experience. In submission.
[24] P. Maymounkov and D. Mazieres. Kademlia: A Peer-to-peer Information System Based on the XOR Metric. In Proceedings of the IPTPS, March 2002.
[25] C. Olston and J. Widom. Offering a precision-performance tradeoff for aggregation queries over replicated data. In VLDB, pages 144-155, Sept. 2000.
[26] K. Petersen, M. Spreitzer, D. Terry, M. Theimer, and A. Demers. Flexible Update Propagation for Weakly Consistent Replication. In Proc. SOSP, Oct. 1997.
[27] PlanetLab. http://www.planet-lab.org.
[28] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing Nearby Copies of Replicated Objects in a Distributed Environment. In ACM SPAA, 1997.
[29] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A Scalable Content Addressable Network. In Proceedings of ACM SIGCOMM, 2001.
[30] S. Ratnasamy, S. Shenker, and I. Stoica. Routing Algorithms for DHTs: Some Open Questions. In IPTPS, March 2002.
[31] T. Roscoe, R. Mortier, P. Jardetzky, and S. Hand. InfoSpect: Using a Logic Language for System Health Monitoring in Distributed Systems. In Proceedings of the SIGOPS European Workshop, 2002.
[32] A. Rowstron and P. Druschel. Pastry: Scalable, Distributed Object Location and Routing for Large-scale Peer-to-peer Systems. In Middleware, 2001.
[33] S. Ratnasamy, M. Handley, R. Karp, and S. Shenker. Application-level Multicast using Content-addressable Networks. In Proceedings of the NGC, November 2001.
[34] W. Stallings. SNMP, SNMPv2, and CMIP. Addison-Wesley, 1993.
[35] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan. Chord: A scalable Peer-To-Peer lookup service for internet applications. In ACM SIGCOMM, 2001.
[36] S. Zhuang, B. Zhao, A. Joseph, R. Katz, and J. Kubiatowicz. Bayeux: An Architecture for Scalable and Fault-tolerant Wide-Area Data Dissemination. In NOSSDAV, 2001.
[37] IBM Tivoli Monitoring. www.ibm.com/software/tivoli/products/monitor.
[38] R. VanRenesse, K. P. Birman, and W. Vogels. Astrolabe: A Robust and Scalable Technology for Distributed System Monitoring, Management, and Data Mining. TOCS, 2003.
[39] R. VanRenesse and A. Bozdog. Willow: DHT, Aggregation, and Publish/Subscribe in One Protocol. In IPTPS, 2004.
[40] A. Venkataramani, P. Weidmann, and M. Dahlin. Bandwidth Constrained Placement in a WAN. In PODC, Aug. 2001.
[41] A. Venkataramani, P. Yalagandula, R. Kokku, S. Sharif, and M. Dahlin. Potential costs and benefits of long-term prefetching for content-distribution. Elsevier Computer Communications, 25(4):367-375, Mar. 2002.
[42] M. Wawrzoniak, L. Peterson, and T. Roscoe. Sophia: An Information Plane for Networked Systems. In HotNets-II, 2003.
[43] R. Wolski, N. Spring, and J. Hayes. The network weather service: A distributed resource performance forecasting service for metacomputing. Journal of Future Generation Computing Systems, 15(5-6):757-768, Oct. 1999.
[44] P. Yalagandula and M. Dahlin. SDIMS: A scalable distributed information management system. Technical Report TR-03-47, Dept. of Computer Sciences, UT Austin, Sep. 2003.
[45] Z. Zhang, S.-M. Shi, and J. Zhu. SOMO: Self-Organized Metadata Overlay for Resource Management in P2P DHT. In IPTPS, 2003.
[46] B. Y. Zhao, J. D. Kubiatowicz, and A. D. Joseph. Tapestry: An Infrastructure for Fault-tolerant Wide-area Location and Routing. Technical Report UCB/CSD-01-1141, UC Berkeley, Apr. 2001.
A Scalable Distributed Information Management System * We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information. To serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures. We design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication. Through extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures. 1. INTRODUCTION The goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications. Monitoring, querying, and reacting to changes in the state of a distributed system are core components of applications such as system management [15, 31, 37, 42], service placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], sensor monitoring and control [20, 21], multicast tree formation [8, 9, 33, 36, 38], and naming and request routing [10, 11]. We therefore speculate that a SDIMS in a networked system would provide a "distributed operating systems backbone" and facilitate the development and deployment of new distributed services. For a large scale information system, hierarchical aggregation is a fundamental abstraction for scalability. Rather than expose all information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information. In a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query "find a [nearby] node with at least 1 GB of free memory" or "find a [nearby] copy of file foo." A hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability. To be used as a basic building block, a SDIMS should have four properties. First, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes. Enterprise and global scale systems today might have tens of thousands to millions of nodes and these numbers will increase over time. 
Similarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines). Second, the system should have flexibility to accommodate a broad range of applications and attributes. For example, readdominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often. An approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes. Conversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes. Therefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications. Third, a SDIMS should provide administrative isolation. In a large system, it is natural to arrange nodes in an organizational or an administrative hierarchy. A SDIMS should support administra tive isolation in which queries about an administrative domain's information can be satisfied within the domain so that the system can operate during disconnections from other domains, so that an external observer cannot monitor or affect intra-domain queries, and to support domain-scoped queries efficiently. Fourth, the system must be robust to node failures and disconnections. A SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can tradeoff the cost of adaptation with the consistency level in the aggregated results when reconfigurations occur. We draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs). Astrolabe [38] is a robust information management system. Astrolabe provides the abstraction of a single logical aggregation tree that mirrors a system's administrative hierarchy. It provides a general interface for installing new aggregation functions and provides eventual consistency on its data. Astrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree. This combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information. This high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes. Also, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes. Recent research in peer-to-peer structured networks resulted in Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46]--a data structure that scales with the number of nodes and that distributes the read-write load for different queries among the participating nodes. It is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node. Indeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects. 
It seems appealing to develop a SDIMS abstraction that exposes this internal functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs. At a first glance, it might appear to be obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS. However, meeting the SDIMS requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh? (2) How to provide flexibility in the aggregation to accommodate different application requirements? (3) How to adapt a global, flat DHT mesh to attain administrative isolation property? and (4) How to provide robustness without unstructured gossip and total replication? The key contributions of this paper that form the foundation of our SDIMS design are as follows. 1. We define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type. This abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes. 2. We provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and staleness. 3. We augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation. 4. We provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggre gation that guarantees eventual consistency and (b) ensuring that our flexible API allows demanding applications gain additional robustness by using tunable spatial replication of data aggregates or by performing fast on-demand reaggregation to augment the underlying lazy reaggregation or by doing both. We have built a prototype of SDIMS. Through simulations and micro-benchmark experiments on a number of department machines and PlanetLab [27] nodes, we observe that the prototype achieves scalability with respect to both nodes and attributes through use of its flexible API, inflicts an order of magnitude lower maximum node stress than unstructured gossiping schemes, achieves isolation properties at a cost of modestly increased read latency compared to flat DHTs, and gracefully handles node failures. This initial study discusses key aspects of an ongoing system building effort, but it does not address all issues in building a SDIMS. For example, we believe that our strategies for providing robustness will mesh well with techniques such as supernodes [22] and other ongoing efforts to improve DHTs [30] for further improving robustness. Also, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree. Additional work is needed to understand the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions. In Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications. In Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS. In Section 5, we detail the implementation of our prototype system. 
Section 6 addresses the issue of adaptation to the topological reconfigurations. In Section 7, we present the evaluation of our system through large-scale simulations and microbenchmarks on real networks. Section 8 details the related work, and Section 9 summarizes our contribution. 2. AGGREGATION ABSTRACTION Aggregation is a natural abstraction for a large-scale distributed information system because aggregation provides scalability by allowing a node to view detailed information about the state near it and progressively coarser-grained summaries about progressively larger subsets of a system's data [38]. Our aggregation abstraction is defined across a tree spanning all nodes in the system. Each physical node in the system is a leaf and each subtree represents a logical group of nodes. Note that logical groups can correspond to administrative domains (e.g., department or university) or groups of nodes within a domain (e.g., 10 workstations on a LAN in CS department). An internal non-leaf node, which we call virtual node, is simulated by one or more physical nodes at the leaves of the subtree for which the virtual node is the root. We describe how to form such trees in a later section. Each physical node has local data stored as a set of (attributeType, attributeName, value) tuples such as (configuration, numCPUs, 16), (mcast membership, session foo, yes), or (file stored, foo, myIPaddress). The system associates an aggregation function ftype with each attribute type, and for each level-i subtree Ti in the system, the system defines an aggregate value Vi, type, name for each (at tributeType, attributeName) pair as follows. For a (physical) leaf node T0 at level 0, V0, type, name is the locally stored value for the attribute type and name or NULL if no matching tuple exists. Then the aggregate value for a level-i subtree Ti is the aggregation function for the type, ftype computed across the aggregate values of each of Ti's k children: Although SDIMS allows arbitrary aggregation functions, it is often desirable that these functions satisfy the hierarchical computation property [21]: f (v1,..., vn) = f (f (v1,..., vs1), f (vs1 +1,..., vs2),..., f (vsk +1,..., vn)), where vi is the value of an attribute at node i. For example, the average operation, defined as avg (v1,..., vn) = 1/n. ∑ n i = 0 vi, does not satisfy the property. Instead, if an attribute stores values as tuples (sum, count), the attribute satisfies the hierarchical computation property while still allowing the applications to compute the average from the aggregate sum and count values. Finally, note that for a large-scale system, it is difficult or impossible to insist that the aggregation value returned by a probe corresponds to the function computed over the current values at the leaves at the instant of the probe. Therefore our system provides only weak consistency guarantees--specifically eventual consistency as defined in [38]. 3. FLEXIBILITY A major innovation of our work is enabling flexible aggregate computation and propagation. The definition of the aggregation abstraction allows considerable flexibility in how, when, and where aggregate values are computed and propagated. While previous systems [15, 29, 38, 32, 35, 46] implement a single static strategy, we argue that a SDIMS should provide flexible computation and propagation to efficiently support wide variety of applications with diverse requirements. 
In order to provide this flexibility, we develop a simple interface that decomposes the aggregation abstraction into three pieces of functionality: install, update, and probe. This definition of the aggregation abstraction allows our system to provide a continuous spectrum of strategies ranging from lazy aggregate computation and propagation on reads to aggressive immediate computation and propagation on writes. In Figure 1, we illustrate both extreme strategies and an intermediate strategy. Under the lazy Update-Local computation and propagation strategy, an update (or write) only affects local state. Then, a probe (or read) that reads a level-i aggregate value is sent up the tree to the issuing node's level-i ancestor and then down the tree to the leaves. The system then computes the desired aggregate value at each layer up the tree until the level-i ancestor that holds the desired value. Finally, the level-i ancestor sends the result down the tree to the issuing node. In the other extreme case of the aggressive Update-All immediate computation and propagation on writes [38], when an update occurs, changes are aggregated up the tree, and each new aggregate value is flooded to all of a node's descendants. In this case, each level-i node not only maintains the aggregate values for the level-i subtree but also receives and locally stores copies of all of its ancestors' level-j (j> i) aggregation values. Also, a leaf satisfies a probe for a level-i aggregate using purely local data. In an intermediate Update-Up strategy, the root of each subtree maintains the subtree's current aggregate value, and when an update occurs, the leaf node updates its local state and passes the update to its parent, and then each successive enclosing subtree updates its aggregate value and passes the new value to its parent. This strategy satisfies a leaf's probe for a level-i aggregate value by sending the probe up to the level-i ancestor of the leaf and then sending the aggregate value down to the leaf. Finally, notice that other strategies exist. In general, an Update-Upk-Downj strategy aggregates up to Table 1: Arguments for the install operation the kth level and propagates the aggregate values of a node at level l (s.t. l ≤ k) downward for j levels. A SDIMS must provide a wide range of flexible computation and propagation strategies to applications for it to be a general abstraction. An application should be able to choose a particular mechanism based on its read-to-write ratio that reduces the bandwidth consumption while attaining the required responsiveness and precision. Note that the read-to-write ratio of the attributes that applications install vary extensively. For example, a read-dominated attribute like numCPUs rarely changes in value, while a writedominated attribute like numProcesses changes quite often. An aggregation strategy like Update-All works well for read-dominated attributes but suffers high bandwidth consumption when applied for write-dominated attributes. Conversely, an approach like UpdateLocal works well for write-dominated attributes but suffers from unnecessary query latency or imprecision for read-dominated attributes. SDIMS also allows non-uniform computation and propagation across the aggregation tree with different up and down parameters in different subtrees so that applications can adapt with the spatial and temporal heterogeneity of read and write operations. 
With respect to spatial heterogeneity, access patterns may differ for different parts of the tree, requiring different propagation strategies for different parts of the tree. Similarly with respect to temporal heterogeneity, access patterns may change over time requiring different strategies over time. 3.1 Aggregation API We provide the flexibility described above by splitting the aggregation API into three functions: Install () installs an aggregation function that defines an operation on an attribute type and specifies the update strategy that the function will use, Update () inserts or modifies a node's local value for an attribute, and Probe () obtains an aggregate value for a specified subtree. The install interface allows applications to specify the k and j parameters of the Update-Upk-Downj strategy along with the aggregation function. The update interface invokes the aggregation of an attribute on the tree according to corresponding aggregation function's aggregation strategy. The probe interface not only allows applications to obtain the aggregated value for a specified tree but also allows a probing node to continuously fetch the values for a specified time, thus enabling an application to adapt to spatial and temporal heterogeneity. The rest of the section describes these three interfaces in detail. 3.1.1 Install The Install operation installs an aggregation function in the system. The arguments for this operation are listed in Table 1. The attrType argument denotes the type of attributes on which this aggregation function is invoked. Installed functions are soft state that must be periodically renewed or they will be garbage collected at expTime. The arguments up and down specify the aggregate computation Figure 1: Flexible API Table 2: Arguments for the probe operation and propagation strategy Update-Upk-Downj. The domain argument, if present, indicates that the aggregation function should be installed on all nodes in the specified domain; otherwise the function is installed on all nodes in the system. 3.1.2 Update The Update operation takes three arguments attrType, attrName, and value and creates a new (attrType, attrName, value) tuple or updates the value of an old tuple with matching attrType and attrName at a leaf node. The update interface meshes with installed aggregate computation and propagation strategy to provide flexibility. In particular, as outlined above and described in detail in Section 5, after a leaf applies an update locally, the update may trigger re-computation of aggregate values up the tree and may also trigger propagation of changed aggregate values down the tree. Notice that our abstraction associates an aggregation function with only an attrType but lets updates specify an attrName along with the attrType. This technique helps achieve scalability with respect to nodes and attributes as described in Section 4. 3.1.3 Probe The Probe operation returns the value of an attribute to an application. The complete argument set for the probe operation is shown in Table 2. Along with the attrName and the attrType arguments, a level argument specifies the level at which the answers are required for an attribute. 
In our implementation we choose to return results at all levels k <l for a level-l probe because (i) it is inexpensive as the nodes traversed for level-l probe also contain level k aggregates for k <l and as we expect the network cost of transmitting the additional information to be small for the small aggregates which we focus and (ii) it is useful as applications can efficiently get several aggregates with a single probe (e.g., for domain-scoped queries as explained in Section 4.2). Probes with mode set to continuous and with finite expTime enable applications to handle spatial and temporal heterogeneity. When node A issues a continuous probe at level l for an attribute, then regardless of the up and down parameters, updates for the attribute at any node in A's level-l ancestor's subtree are aggregated up to level l and the aggregated value is propagated down along the path from the ancestor to A. Note that continuous mode enables SDIMS to support a distributed sensor-actuator mechanism where a sensor monitors a level-i aggregate with a continuous mode probe and triggers an actuator upon receiving new values for the probe. The up and down arguments enable applications to perform ondemand fast re-aggregation during reconfigurations, where a forced re-aggregation is done for the corresponding levels even if the aggregated value is available, as we discuss in Section 6. When present, the up and down arguments are interpreted as described in the install operation. 3.1.4 Dynamic Adaptation At the API level, the up and down arguments in install API can be regarded as hints, since they suggest a computation strategy but do not affect the semantics of an aggregation function. A SDIMS implementation can dynamically adjust its up/down strategies for an attribute based on its measured read/write frequency. But a virtual intermediate node needs to know the current up and down propagation values to decide if the local aggregate is fresh in order to answer a probe. This is the key reason why up and down need to be statically defined at the install time and cannot be specified in the update operation. In dynamic adaptation, we implement a leasebased mechanism where a node issues a lease to a parent or a child denoting that it will keep propagating the updates to that parent or child. We are currently evaluating different policies to decide when to issue a lease and when to revoke a lease. 4. SCALABILITY Our design achieves scalability with respect to both nodes and attributes through two key ideas. First, it carefully defines the aggregation abstraction to mesh well with its underlying scalable DHT system. Second, it refines the basic DHT abstraction to form an Autonomous DHT (ADHT) to achieve the administrative isolation properties that are crucial to scaling for large real-world systems. In this section, we describe these two ideas in detail. 4.1 Leveraging DHTs In contrast to previous systems [4, 15, 38, 39, 45], SDIMS's aggregation abstraction specifies both an attribute type and attribute name and associates an aggregation function with a type rather than just specifying and associating a function with a name. Installing a single function that can operate on many different named attributes matching a type improves scalability for "sparse attribute types" with large, sparsely-filled name spaces. For example, to construct a file location service, our interface allows us to install a single function that computes an aggregate value for any named file. 
A subtree's aggregate value for (FILELOC, name) would be the ID of a node in the subtree that stores the named file. Conversely, Astrolabe copes with sparse attributes by having aggregation functions compute sets or lists and suggests that scalability can be improved by representing such sets with Bloom filters [6]. Supporting sparse names within a type provides at least two advantages. First, when the value associated with a name is updated, only the state associ Figure 2: The DHT tree corresponding to key 111 (DHTtree111) and the corresponding aggregation tree. ated with that name needs to be updated and propagated to other nodes. Second, splitting values associated with different names into different aggregation values allows our system to leverage Distributed Hash Tables (DHTs) to map different names to different trees and thereby spread the function's logical root node's load and state across multiple physical nodes. Given this abstraction, scalably mapping attributes to DHTs is straightforward. DHT systems assign a long, random ID to each node and define an algorithm to route a request for key k to a node rootk such that the union of paths from all nodes forms a tree DHTtreek rooted at the node rootk. Now, as illustrated in Figure 2, by aggregating an attribute along the aggregation tree corresponding to DHTtreek for k = hash (attribute type, attribute name), different attributes will be aggregated along different trees. In comparison to a scheme where all attributes are aggregated along a single tree, aggregating along multiple trees incurs lower maximum node stress: whereas in a single aggregation tree approach, the root and the intermediate nodes pass around more messages than leaf nodes, in a DHT-based multi-tree, each node acts as an intermediate aggregation point for some attributes and as a leaf node for other attributes. Hence, this approach distributes the onus of aggregation across all nodes. 4.2 Administrative Isolation Aggregation trees should provide administrative isolation by ensuring that for each domain, the virtual node at the root of the smallest aggregation subtree containing all nodes of that domain is hosted by a node in that domain. Administrative isolation is important for three reasons: (i) for security--so that updates and probes flowing in a domain are not accessible outside the domain, (ii) for availability--so that queries for values in a domain are not affected by failures of nodes in other domains, and (iii) for efficiency--so that domain-scoped queries can be simple and efficient. To provide administrative isolation to aggregation trees, a DHT should satisfy two properties: 1. Path Locality: Search paths should always be contained in the smallest possible domain. 2. Path Convergence: Search paths for a key from different nodes in a domain should converge at a node in that domain. Existing DHTs support path locality [18] or can easily support it by using the domain nearness as the distance metric [7, 17], but they do not guarantee path convergence as those systems try to optimize the search path to the root to reduce response latency. For example, Pastry [32] uses prefix routing in which each node's routing table contains one row per hexadecimal digit in the nodeId space where the ith row contains a list of nodes whose nodeIds differ from the current node's nodeId in the ith digit with one entry for each possible digit value. 
Given a routing topology, to route a packet to an arbitrary destination key, a node in Pastry forwards a packet to the node with a nodeId prefix matching the key in at least one more digit than the current node. If such a node is not known, the current node uses an additional data structure, the leaf set containing Figure 3: Example shows how isolation property is violated with original Pastry. We also show the corresponding aggregation tree. Figure 4: Autonomous DHT satisfying the isolation property. Also the corresponding aggregation tree is shown. L immediate higher and lower neighbors in the nodeId space, and forwards the packet to a node with an identical prefix but that is numerically closer to the destination key in the nodeId space. This process continues until the destination node appears in the leaf set, after which the message is routed directly. Pastry's expected number of routing steps is log n, where n is the number of nodes, but as Figure 3 illustrates, this algorithm does not guarantee path convergence: if two nodes in a domain have nodeIds that match a key in the same number of bits, both of them can route to a third node outside the domain when routing for that key. Simple modifications to Pastry's route table construction and key-routing protocols yield an Autonomous DHT (ADHT) that satisfies the path locality and path convergence properties. As Figure 4 illustrates, whenever two nodes in a domain share the same prefix with respect to a key and no other node in the domain has a longer prefix, our algorithm introduces a virtual node at the boundary of the domain corresponding to that prefix plus the next digit of the key; such a virtual node is simulated by the existing node whose id is numerically closest to the virtual node's id. Our ADHT's routing table differs from Pastry's in two ways. First, each node maintains a separate leaf set for each domain of which it is a part. Second, nodes use two proximity metrics when populating the routing tables--hierarchical domain proximity is the primary metric and network distance is secondary. Then, to route a packet to a global root for a key, ADHT routing algorithm uses the routing table and the leaf set entries to route to each successive enclosing domain's root (the virtual or real node in the domain matching the key in the maximum number of digits). Additional details about the ADHT algorithm are available in an extended technical report [44]. Properties. Maintaining a different leaf set for each administrative hierarchy level increases the number of neighbors that each node tracks to (2b) * lgb n + c.l from (2b) * lgb n + c in unmodified Pastry, where b is the number of bits in a digit, n is the number of nodes, c is the leaf set size, and l is the number of domain levels. Routing requires O (lgbn + l) steps compared to O (lgbn) steps in Pastry; also, each routing hop may be longer than in Pastry because the modified algorithm's routing table prefers same-domain nodes over nearby nodes. We experimentally quantify the additional routing costs in Section 7. In a large system, the ADHT topology allows domains to im Figure 5: Example for domain-scoped queries prove security for sensitive attribute types by installing them only within a specified domain. Then, aggregation occurs entirely within the domain and a node external to the domain can neither observe nor affect the updates and aggregation computations of the attribute type. 
Furthermore, though we have not implemented this feature in the prototype, the ADHT topology would also support domainrestricted probes that could ensure that no one outside of a domain can observe a probe for data stored within the domain. The ADHT topology also enhances availability by allowing the common case of probes for data within a domain to depend only on a domain's nodes. This, for example, allows a domain that becomes disconnected from the rest of the Internet to continue to answer queries for local data. Aggregation trees that provide administrative isolation also enable the definition of simple and efficient domain-scoped aggregation functions to support queries like "what is the average load on machines in domain X?" For example, consider an aggregation function to count the number of machines in an example system with three machines illustrated in Figure 5. Each leaf node l updates attribute NumMachines with a value vl containing a set of tuples of form (Domain, Count) for each domain of which the node is a part. In the example, the node A1 with name A1.A. performs an update with the value ((A1.A.,1), (A.,1), (. ,1)). An aggregation function at an internal virtual node hosted on node N with child set C computes the aggregate as a set of tuples: for each domain D that N is part of, form a tuple (D, ∑ c ∈ C (count | (D, count) ∈ vc)). This computation is illustrated in the Figure 5. Now a query for NumMachines with level set to MAX will return the aggregate values at each intermediate virtual node on the path to the root as a set of tuples (tree level, aggregated value) from which it is easy to extract the count of machines at each enclosing domain. For example, A1 would receive ((2, ((B1.B.,1), (B.,1), (. ,3))), (1, ((A1.A.,1), (A.,2), (. ,2))), (0, ((A1.A.,1), (A.,1), (. ,1)))). Note that supporting domain-scoped queries would be less convenient and less efficient if aggregation trees did not conform to the system's administrative structure. It would be less efficient because each intermediate virtual node will have to maintain a list of all values at the leaves in its subtree along with their names and it would be less convenient as applications that need an aggregate for a domain will have to pick values of nodes in that domain from the list returned by a probe and perform computation. 5. PROTOTYPE IMPLEMENTATION The internal design of our SDIMS prototype comprises of two layers: the Autonomous DHT (ADHT) layer manages the overlay topology of the system and the Aggregation Management Layer (AML) maintains attribute tuples, performs aggregations, stores and propagates aggregate values. Given the ADHT construction described in Section 4.2, each node implements an Aggregation Management Layer (AML) to support the flexible API described in Section 3. In this section, we describe the internal state and operation of the AML layer of a node in the system. Figure 6: Example illustrating the data structures and the organization of them at a node. We refer to a store of (attribute type, attribute name, value) tuples as a Management Information Base or MIB, following the terminology from Astrolabe [38] and SNMP [34]. We refer an (attribute type, attribute name) tuple as an attribute key. 
As Figure 6 illustrates, each physical node in the system acts as several virtual nodes in the AML: a node acts as leaf for all attribute keys, as a level-1 subtree root for keys whose hash matches the node's ID in b prefix bits (where b is the number of bits corrected in each step of the ADHT's routing scheme), as a level-i subtree root for attribute keys whose hash matches the node's ID in the initial i ∗ b bits, and as the system's global root for attribute keys whose hash matches the node's ID in more prefix bits than any other node (in case of a tie, the first non-matching bit is ignored and the comparison is continued [46]). To support hierarchical aggregation, each virtual node at the root of a level-i subtree maintains several MIBs that store (1) child MIBs containing raw aggregate values gathered from children, (2) a reduction MIB containing locally aggregated values across this raw information, and (3) an ancestor MIB containing aggregate values scattered down from ancestors. This basic strategy of maintaining child, reduction, and ancestor MIBs is based on Astrolabe [38], but our structured propagation strategy channels information that flows up according to its attribute key and our flexible propagation strategy only sends child updates up and ancestor aggregate results down as far as specified by the attribute key's aggregation function. Note that in the discussion below, for ease of explanation, we assume that the routing protocol is correcting single bit at a time (b = 1). Our system, built upon Pastry, handles multi-bit correction (b = 4) and is a simple extension to the scheme described here. For a given virtual node ni at level i, each child MIB contains the subset of a child's reduction MIB that contains tuples that match ni's node ID in i bits and whose up aggregation function attribute is at least i. These local copies make it easy for a node to recompute a level-i aggregate value when one child's input changes. Nodes maintain their child MIBs in stable storage and use a simplified version of the Bayou log exchange protocol (sans conflict detection and resolution) for synchronization after disconnections [26]. Virtual node ni at level i maintains a reduction MIB of tuples with a tuple for each key present in any child MIB containing the attribute type, attribute name, and output of the attribute type's aggregate functions applied to the children's tuples. A virtual node ni at level i also maintains an ancestor MIB to store the tuples containing attribute key and a list of aggregate values at different levels scattered down from ancestors. Note that the list for a key might contain multiple aggregate values for a same level but aggregated at different nodes (see Figure 4). So, the aggregate values are tagged not only with level information, but are also tagged with ID of the node that performed the aggregation. Level-0 differs slightly from other levels. Each level-0 leaf node maintains a local MIB rather than maintaining child MIBs and a reduction MIB. This local MIB stores information about the local node's state inserted by local applications via update () calls. We envision various "sensor" programs and applications insert data into local MIB. For example, one program might monitor local configuration and perform updates with information such as total memory, free memory, etc., A distributed file system might perform update for each file stored on the local node. 
Along with these MIBs, a virtual node maintains two other tables: an aggregation function table and an outstanding probes table. The aggregation function table contains the aggregation function and installation arguments (see Table 1) associated with an attribute type, or with an attribute type and name. Each aggregation function is installed on all nodes in a domain's subtree, so the aggregation function table can be thought of as a special case of the ancestor MIB in which domain functions are always installed up to a root within a specified domain and down to all nodes within the domain. The outstanding probes table maintains temporary information about in-progress probes. Given these data structures, it is simple to support the three API functions described in Section 3.1.

Install: The Install operation (see Table 1) installs on a domain an aggregation function that acts on a specified attribute type. Execution of an install operation for function aggrFunc on attribute type attrType proceeds in two phases: first, the install request is passed up the ADHT tree with the attribute key (attrType, null) until it reaches the root for that key within the specified domain; then, the request is flooded down the tree and installed on all intermediate and leaf nodes.

Update: When a level-i virtual node receives an update for an attribute from a child below, it first recomputes the level-i aggregate value for the specified key, stores that value in its reduction MIB, and then, subject to the function's up and domain parameters, passes the updated value to the appropriate parent based on the attribute key. Also, a level-i (i ≥ 1) virtual node sends the updated level-i aggregate to all its children if the function's down parameter exceeds zero. Upon receipt of a level-i aggregate from a parent, a level-k virtual node stores the value in its ancestor MIB and, if k > i − down, forwards this aggregate to its children.

Probe: A Probe collects and returns the aggregate value for a specified attribute key at a specified level of the tree. As Figure 1 illustrates, the system satisfies a probe for a level-i aggregate value using a four-phase protocol that may be short-circuited when updates have previously propagated either results or partial results up or down the tree. In phase 1, the route probe phase, the system routes the probe up the attribute key's tree to either the root of the level-i subtree or a node that stores the requested value in its ancestor MIB. In the former case the system proceeds to phase 2, and in the latter it skips to phase 4. In phase 2, the probe scatter phase, each node that receives a probe request sends it to all of its children unless the node's reduction MIB already has a value that matches the probe's attribute key, in which case the node initiates phase 3 on behalf of its subtree. In phase 3, the probe aggregation phase, when a node receives values for the specified key from each of its children, it executes the aggregation function on these values and either (a) forwards the result to its parent (if its level is less than i) or (b) initiates phase 4 (if it is at level i). Finally, in phase 4, the aggregate routing phase, the aggregate value is routed down to the node that requested it. Note that in the extreme case of a function installed with up = down = 0, a level-i probe can touch all nodes in a level-i subtree, while in the opposite extreme case of a function installed with up = down = ALL, a probe is a completely local operation at a leaf.
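The update rule can be summarized in a few lines. The following schematic sketch paraphrases it; the storage and messaging helpers are hypothetical stand-ins for the AML's internals, and the up > level test is our reading of the condition under which a value continues to flow up.

```java
// A schematic Java sketch of the Update propagation rule described above.
// The store/send helpers are hypothetical stand-ins for the AML's internal
// state and messaging, not the prototype's actual interface.
public abstract class UpdatePropagation {
    final int level; // this virtual node's level (k in the text)

    UpdatePropagation(int level) { this.level = level; }

    // A child below reports an update; newAggregate is the recomputed
    // level-`level` value for `key`, whose function was installed with
    // the given up and down parameters.
    void onChildUpdate(String key, int up, int down, Object newAggregate) {
        storeInReductionMIB(key, newAggregate);          // keep the new value
        if (up > level) sendToParent(key, newAggregate); // pass further up
        if (level >= 1 && down > 0)                      // scatter downward
            sendToChildren(key, level, newAggregate);
    }

    // A level-i aggregate arrives from the parent.
    void onParentAggregate(String key, int i, int down, Object aggregate) {
        storeInAncestorMIB(key, i, aggregate);
        if (level > i - down)                            // the k > i - down test
            sendToChildren(key, i, aggregate);
    }

    abstract void storeInReductionMIB(String key, Object value);
    abstract void storeInAncestorMIB(String key, int level, Object value);
    abstract void sendToParent(String key, Object value);
    abstract void sendToChildren(String key, int level, Object value);
}
```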
For probes that include phases 2 (probe scatter) and 3 (probe aggregation), an issue is how to decide when a node should stop waiting for its children to respond and send up its current aggregate value. A node stops waiting for its children when one of three conditions occurs: (1) all children have responded, (2) the ADHT layer signals one or more reconfiguration events that mark all children that have not yet responded as unreachable, or (3) a watchdog timer for the request fires. The last case accounts for nodes that participate in the ADHT protocol but fail at the AML level. At a virtual node, continuous probes are handled similarly to one-shot probes, except that such probes are stored in the outstanding probes table for the time period expTime specified in the probe. Each update for an attribute thus triggers re-evaluation of the continuous probes for that attribute.

We implement a lease-based mechanism for dynamic adaptation. A level-l virtual node for an attribute can issue the lease for the level-l aggregate to a parent or a child only if up is greater than l or it has leases from all its children. A virtual node at level l can issue the lease for a level-k aggregate (k > l) to a child only if down > k − l or it has the lease for that aggregate from its parent. A probe for a level-k aggregate can then be answered by a level-l virtual node if it has a valid lease, irrespective of the up and down values. We are currently designing different policies to decide when to issue a lease and when to revoke one, and we are evaluating them with the above mechanism.

Our current prototype does not implement access control on install, update, and probe operations, but we plan to implement Astrolabe's [38] certificate-based restrictions. Our current prototype also does not restrict the resources consumed in executing aggregation functions, but techniques from research on resource management in server systems and operating systems [2, 3] can be applied here.

6. ROBUSTNESS

In large scale systems, reconfigurations are common. Our two main principles for robustness are to guarantee (i) read availability (probes complete in finite time) and (ii) eventual consistency (updates by a live node will be visible to probes by connected nodes in finite time). During reconfigurations, a probe might return a stale value for two reasons. First, reconfigurations lead to incorrectness in the previously computed aggregate values. Second, the nodes needed for aggregation to answer the probe may become unreachable. Our system also provides two hooks that applications can use for improved end-to-end robustness in the presence of reconfigurations: (1) on-demand re-aggregation and (2) application-controlled replication. Our system handles reconfigurations at two levels: adaptation at the ADHT layer to ensure connectivity and adaptation at the AML layer to ensure access to the data in SDIMS.

6.1 ADHT Adaptation

Our ADHT layer adaptation algorithm is the same as Pastry's [32]: the leaf sets are repaired as soon as a reconfiguration is detected, and the routing table is repaired lazily. Note that maintaining extra leaf sets does not degrade the fault-tolerance properties of the original Pastry; indeed, it enhances the resilience of ADHTs to failures by providing additional routing links. Due to redundancy in the leaf sets and the routing table, updates can be routed towards their root nodes successfully even during failures.
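The lease conditions above reduce to two simple predicates, sketched below with illustrative names; the bookkeeping that tracks which leases are actually held is elided.

```java
// A minimal sketch of the lease-issuing conditions described above; the
// surrounding bookkeeping (who holds which lease) is elided and the names
// are illustrative.
public class LeaseRules {
    // A level-l virtual node may issue the lease for its own level-l
    // aggregate only if up > l or it holds leases from all its children.
    static boolean canIssueLevelLLease(int l, int up, boolean leasesFromAllChildren) {
        return up > l || leasesFromAllChildren;
    }

    // A level-l node may issue the lease for a level-k aggregate (k > l)
    // to a child only if down > k - l or it holds that lease from its parent.
    static boolean canIssueLevelKLease(int l, int k, int down, boolean leaseFromParent) {
        return down > k - l || leaseFromParent;
    }

    // With a valid lease, a level-l node may answer a level-k probe
    // irrespective of the up and down values.
}
```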
Figure 7: Default lazy data re-aggregation time line.

Also note that the administrative isolation property satisfied by our ADHT algorithm ensures that reconfigurations in a level-i domain do not affect probes for level i in a sibling domain.

6.2 AML Adaptation

Broadly, we use two types of strategies for AML adaptation in the face of reconfigurations: (1) replication in time as a fundamental baseline strategy, and (2) replication in space as an additional performance optimization that falls back on replication in time when the system runs out of replicas. We provide two mechanisms for replication in time. First, lazy re-aggregation propagates already received updates to new children or new parents in a lazy fashion over time. Second, applications can reduce the probability of probe response staleness during such repairs through our flexible API with an appropriate setting of the down parameter.

Lazy Re-aggregation: The DHT layer informs the AML layer about reconfigurations in the network using three function calls: newParent, failedChild, and newChild. On newParent(parent, prefix), all probes in the outstanding probes table corresponding to prefix are re-evaluated; if parent is not null, aggregation functions and already existing data are lazily transferred in the background, while any new updates, installs, and probes for this prefix are sent to the parent immediately. On failedChild(child, prefix), the AML layer marks the child as inactive, and any outstanding probes that are waiting for data from this child are re-evaluated. On newChild(child, prefix), the AML layer creates space in its data structures for this child. Figure 7 shows the time line for the default lazy re-aggregation upon reconfiguration. Probes initiated between points 1 and 2 that are affected by the reconfiguration are re-evaluated by the AML upon detecting the reconfiguration; probes that complete or start between points 2 and 8 may return stale answers.
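These notifications form a narrow interface between the two layers. The following sketch paraphrases that interface; the listener framing, names, and signatures are ours rather than the prototype's, and the comments restate the behavior described above.

```java
// An illustrative sketch of the three reconfiguration callbacks that the
// ADHT layer delivers to the AML layer; names and signatures are assumed.
interface ReconfigurationListener {
    // A new parent took over the subtree for `prefix`: re-evaluate the
    // outstanding probes for that prefix; if parentId is non-null, lazily
    // transfer installed functions and existing data in the background
    // while sending new updates, installs, and probes for the prefix to
    // the parent immediately.
    void newParent(String parentId, String prefix);

    // A child became unreachable: mark it inactive and re-evaluate any
    // outstanding probes still waiting on data from it.
    void failedChild(String childId, String prefix);

    // A child joined: allocate child-MIB state for it.
    void newChild(String childId, String prefix);
}
```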
On-demand Re-aggregation: The default lazy re-aggregation scheme propagates old updates through the system lazily. Additionally, using the up and down knobs in the Probe API, applications can force on-demand fast re-aggregation of updates to avoid staleness in the face of reconfigurations. In particular, if an application detects or suspects that an answer is stale, it can re-issue the probe with increased up and down parameters to force refreshing of the cached data. Note that this strategy is useful only after the DHT adaptation has completed (point 6 on the time line in Figure 7).

Replication in Space: Replication in space is more challenging in our system than in a DHT file location application: in the latter it can be achieved simply by replicating the root node's contents, whereas in our system all internal nodes have to be replicated along with the root. In our system, applications control replication in space using the up and down knobs in the Install API; with large up and down values, aggregates at the intermediate virtual nodes are propagated to more nodes in the system. By reducing the number of nodes that must be accessed to answer a probe, applications can reduce the probability of incorrect results caused by the failure of nodes that do not contribute to the aggregate. For example, in a file location application, using a positive down parameter ensures that a file's global aggregate is replicated on nodes other than the root; probes for the file's location can then be answered without accessing the root and hence are not affected by the failure of the root. Note, however, that this technique is not appropriate in some cases. An aggregated value in a file location system is valid as long as the node hosting the file is active, irrespective of the status of other nodes in the system, whereas an application that counts the number of machines in a system may receive incorrect results irrespective of the replication. If reconfigurations are only transient (like a node temporarily not responding due to a burst of load), the replicated aggregate closely or exactly reflects the current state.

7. EVALUATION

We have implemented a prototype of SDIMS in Java using the FreePastry framework [32] and performed large-scale simulation experiments and micro-benchmark experiments on two real networks: 187 machines in the department and 69 machines on the PlanetLab [27] testbed. In all experiments, we use static up and down values and turn off dynamic adaptation. Our evaluation supports four main conclusions. First, the flexible API provides different propagation strategies that minimize communication resources at different read-to-write ratios; for example, in our simulations we observe Update-Local to be efficient for read-to-write ratios below 0.0001, Update-Up around 1, and Update-All above 50000. Second, our system is scalable with respect to both nodes and attributes; in particular, we find that the maximum node stress in our system is an order of magnitude lower than observed with an Update-All, gossiping approach. Third, in contrast to unmodified Pastry, which violates the path convergence property in up to 14% of cases, our system always conforms to the property. Fourth, the system is robust to reconfigurations and adapts to failures within a few seconds.

7.1 Simulation Experiments

Flexibility and Scalability: A major innovation of our system is its ability to provide flexible computation and propagation of aggregates. In Figure 8, we demonstrate the flexibility exposed by the aggregation API explained in Section 3. We simulate a system with 4096 nodes arranged in a domain hierarchy with a branching factor (bf) of 16 and install several attributes with different up and down parameters. We plot the average number of messages per operation incurred over a wide range of read-to-write ratios for the different attributes. Simulations with other network sizes and branching factors reveal similar results. This graph clearly demonstrates the benefit of supporting a wide range of computation and propagation strategies.

Figure 8: Flexibility of our approach, with different UP and DOWN values in a network of 4096 nodes for different read-to-write ratios.

Figure 9: Max node stress for a gossiping approach vs. the ADHT-based approach for different numbers of nodes with an increasing number of sparse attributes.

Although a small UP value is efficient for attributes with low read-to-write ratios (write-dominated applications), the probe latency, when reads do occur, may be high, since the probe needs to aggregate the data from all the nodes that did not send their aggregates up. Conversely, applications that wish to improve probe overheads or latencies can increase their UP and DOWN propagation at a potential cost of increased write overheads.
Compared to an existing Update-All single aggregation tree approach [38], scalability in SDIMS comes from (1) leveraging DHTs to form multiple aggregation trees that split the load across nodes and (2) flexible propagation that avoids propagating all updates to all nodes. Figure 9 demonstrates SDIMS's scalability with nodes and attributes. For this experiment, we built a simulator to simulate both Astrolabe [38] (a gossiping, Update-All approach) and our system for an increasing number of sparse attributes. Each attribute corresponds to membership in a multicast session with a small number of participants. For this experiment, the session size is set to 8, the branching factor is set to 16, the propagation mode for SDIMS is Update-Up, and the participant nodes perform continuous probes for the global aggregate value. We plot the maximum node stress (in terms of messages) observed in both schemes for different sized networks with an increasing number of sessions when each session participant performs an update operation. Clearly, the DHT-based scheme is more scalable with respect to attributes than an Update-All gossiping scheme. Observe that at a constant number of attributes, as the number of nodes in the system increases, the maximum node stress increases in the gossiping approach, while it decreases in our approach because the aggregation load is spread across more nodes. Simulations with other session sizes (4 and 16) yield similar results.

Administrative Hierarchy and Robustness: Although the routing protocol of ADHT might lead to an increased number of hops to reach the root for a key compared to original Pastry, the algorithm conforms to the path convergence and locality properties and thus provides the administrative isolation property. In Figure 10, we quantify the increased path length through comparisons with unmodified Pastry for different sized networks with different branching factors of the domain hierarchy tree. To quantify the path convergence property, we perform simulations with a large number of probe pairs, each pair probing for a random key starting from two randomly chosen nodes. In Figure 11, we plot the percentage of probe pairs for unmodified Pastry that do not conform to the path convergence property. When the branching factor is low, the domain hierarchy tree is deeper, resulting in a large difference between Pastry and ADHT in the average path length; but it is at these small domain sizes that path convergence fails most often with the original Pastry.

Figure 10: Average path length to root in Pastry versus ADHT for different branching factors. Note that all lines corresponding to Pastry overlap.

Figure 11: Percentage of probe pairs whose paths to the root did not conform to the path convergence property with Pastry.

Figure 12: Latency of probes for the aggregate at the global root level with three different modes of aggregate propagation on (a) department machines, and (b) PlanetLab machines.

7.2 Testbed Experiments

We run our prototype on 180 department machines (some machines run multiple node instances, so this configuration has a total of 283 SDIMS nodes) and also on 69 machines of the PlanetLab [27] testbed. We measure the performance of our system with two micro-benchmarks.
In the first micro-benchmark, we install three aggregation functions of types Update-Local, Update-Up, and Update-All, perform an update operation on all nodes for all three aggregation functions, and measure the latencies incurred by probes for the global aggregate from all nodes in the system. Figure 12 shows the observed latencies for both testbeds. Notice that the latency with Update-Local is high compared to the Update-Up policy. This is because the latency of Update-Local is affected by the presence of even a single slow machine or a single machine with a high-latency network connection.

Figure 13: Micro-benchmark on the department network showing the behavior of probes from a single node while failures happen at other nodes. All 283 nodes assign a value of 10 to the attribute.

Figure 14: Probe performance during failures on 69 machines of the PlanetLab testbed.

In the second benchmark, we examine robustness. We install one aggregation function of type Update-Up that performs a sum operation on an integer-valued attribute. Each node updates the attribute with the value 10. We then monitor the latencies and the results returned by probe operations for the global aggregate at one chosen node while we kill some nodes after every few probes. Figure 13 shows the results on the departmental testbed. Due to the nature of the testbed (machines in a department), there is little change in the latencies even in the face of reconfigurations. In Figure 14, we present the results of the experiment on the PlanetLab testbed. The root node of the aggregation tree is terminated after about 275 seconds. There is a 5X increase in the latencies after the death of the initial root node, as a more distant node becomes the root after repairs. In both experiments, the values returned by probes start reflecting the correct situation within a short time after the failures. From both the testbed benchmark experiments and the simulation experiments on flexibility and scalability, we conclude that (1) the flexibility provided by SDIMS allows applications to trade off read-write overheads (Figure 8), read latency, and sensitivity to slow machines (Figure 12), and (2) a good default aggregation strategy is Update-Up, which has moderate overheads on both reads and writes.

7.3 Applications

SDIMS is designed as a general distributed monitoring and control infrastructure for a broad range of applications. Above, we discuss some simple micro-benchmarks, including a multicast membership service and a calculate-sum function. Van Renesse et al. [38] provide detailed examples of how such a service can be used for a peer-to-peer caching directory, a data-diffusion service, a publish-subscribe system, barrier synchronization, and voting. Additionally, we have initial experience using SDIMS to construct two significant applications: the control plane for a large-scale distributed file system [12] and a network monitor for identifying "heavy hitters" that consume excess resources.

Distributed file system control: The PRACTI (Partial Replication, Arbitrary Consistency, Topology Independence) replication system provides a set of mechanisms for data replication over which arbitrary control policies can be layered. We use SDIMS to provide several key functions in order to create a file system over the low-level PRACTI mechanisms. First, nodes use SDIMS as a directory to handle read misses. When a node n receives an object o, it updates the (ReadDir, o) attribute with the value n; when n discards o from its local store, it resets (ReadDir, o) to NULL.
At each virtual node, the ReadDir aggregation function simply selects a random non-null child value (if any), and we use the Update-Up policy for propagating updates. Finally, to locate a nearby copy of an object o, a node n1 issues a series of probe requests for the (ReadDir, o) attribute, starting with level = 1 and increasing the level value with each repeated probe request until a non-null node ID n2 is returned. n1 then sends a demand read request to n2, and n2 sends the data if it has it. Conversely, if n2 does not have a copy of o, it sends a nack to n1, and n1 issues a retry probe with the down parameter set to a value larger than that used in the previous probe in order to force on-demand re-aggregation, which will yield a fresher value for the retry.

Second, nodes subscribe to invalidations and updates to interest sets of files, and nodes use SDIMS to set up and maintain per-interest-set, network-topology-sensitive spanning trees for propagating this information. To subscribe to invalidations for interest set i, a node n1 first updates the (Inval, i) attribute with its identity n1, and the aggregation function at each virtual node selects one non-null child value. Finally, n1 probes increasing levels of the (Inval, i) attribute until it finds the first node n2 ≠ n1; n1 then uses n2 as its parent in the spanning tree. n1 also issues a continuous probe for this attribute at this level so that it is notified of any change to its spanning tree parent. Spanning trees for streams of pushed updates are maintained in a similar manner.

In the future, we plan to use SDIMS for at least two additional services within this replication system. First, we plan to use SDIMS to track the read and write rates to different objects; prefetch algorithms will use this information to prioritize replication [40, 41]. Second, we plan to track the ranges of invalidation sequence numbers seen by each node for each interest set in order to augment the spanning trees described above with additional "hole filling" that allows nodes to locate specific invalidations they have missed. Overall, our initial experience with using SDIMS for the PRACTI replication system suggests that (1) the general aggregation interface provided by SDIMS simplifies the construction of distributed applications: given the low-level PRACTI mechanisms, we were able to construct a basic file system that uses SDIMS for several distinct control tasks in under two weeks; and (2) the weak consistency guarantees provided by SDIMS meet the requirements of this application: each node's controller effectively treats information from SDIMS as hints, and if a contacted node does not have the needed data, the controller retries, using SDIMS's on-demand re-aggregation to obtain a fresher hint.

Distributed heavy hitter problem: The goal of the heavy hitter problem is to identify network sources, destinations, or protocols that account for significant or unusual amounts of traffic. As noted by Estan et al. [13], this information is useful for a variety of applications such as intrusion detection (e.g., port scanning), denial-of-service detection, worm detection and tracking, fair network allocation, and network maintenance. Significant work has been done on developing high-performance stream-processing algorithms for identifying heavy hitters at a single router, but this is just a first step; ideally, these applications would like not just one router's view of the heavy hitters but an aggregate view.
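The read-miss lookup described above amounts to a simple widening-probe loop. The sketch below illustrates it; the Sdims interface is a simplified, hypothetical stand-in for the actual SDIMS probe API, and all names and signatures are ours.

```java
// A sketch of the read-miss directory lookup described above. The Sdims
// interface is a hypothetical stand-in for the actual probe API.
public class ReadDirClient {
    interface Sdims {
        // Probe the (type, name) attribute at the given level with the
        // given up/down settings; returns the aggregate (a node ID or null).
        String probe(String type, String name, int level, int up, int down);
    }

    // Locate a nearby copy of object o by probing progressively larger
    // enclosing subtrees until a non-null holder ID is returned.
    static String locate(Sdims sdims, String o, int maxLevel) {
        for (int level = 1; level <= maxLevel; level++) {
            String holder = sdims.probe("ReadDir", o, level, 0, 0);
            if (holder != null) return holder;
        }
        return null; // no copy known anywhere in the tree
    }

    // If the holder nacks (it has since discarded o), retry at the same
    // level with a larger down value to force on-demand re-aggregation
    // and obtain a fresher hint.
    static String retryFresher(Sdims sdims, String o, int level, int largerDown) {
        return sdims.probe("ReadDir", o, level, 0, largerDown);
    }
}
```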
We use SDIMS to allow local information about heavy hitters to be pooled into a view of global heavy hitters. For each destination IP address IPx, a node updates the attribute (DestBW, IPx) with the number of bytes sent to IPx in the last time window. The aggregation function for attribute type DestBW is installed with the Update-Up strategy and simply adds the values from child nodes. Nodes perform a continuous probe for the global aggregate of the attribute and raise an alarm when the global aggregate value goes above a specified limit. Note that only nodes sending data to a particular IP address perform probes for the corresponding attribute. Also note that the techniques from [25] can be extended to the hierarchical case to trade off precision for communication bandwidth.

8. RELATED WORK

The aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project. Astrolabe adopts a Propagate-All strategy and unstructured gossiping techniques to attain robustness [5]. However, any gossiping scheme requires aggressive replication of the aggregates. While such aggressive replication is efficient for read-dominated attributes, it incurs a high message cost for attributes with a small read-to-write ratio. Our approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios. Other closely related projects include Willow [39], Cone [4], DASIS [1], and SOMO [45]. Willow, DASIS, and SOMO build a single tree for aggregation. Cone builds a tree per attribute and requires a total order on the attribute values.

Several academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems. Some of them are centralized, with all monitoring data collected and analyzed at a central host. Ganglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and cluster aggregates are further aggregated along a single tree. Sophia [42] is a distributed monitoring system designed with a declarative logic programming model in which the location of query execution is both explicit in the language and can be calculated during evaluation. This research is complementary to our work. TAG [21] collects information from a large number of sensors along a single tree.

The observation that DHTs internally provide a scalable forest of reduction trees is not new. Plaxton et al.'s [28] original paper describes not a DHT but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects. Many systems, building upon both Plaxton's bit-correcting strategy [32, 46] and other strategies [24, 29, 35], have chosen to hide this power and export a simple and general distributed hash table abstraction as a useful building block for a broad range of distributed applications. Some of these systems internally make use of the reduction forest not only for routing but also for caching [32], but for simplicity these systems do not generally export this powerful functionality in their external interface. Our goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction. Although object location is the predominant target application for DHTs, several other applications like multicast [8, 9, 33, 36] and DNS [11] are also built using DHTs.
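As a rough illustration of this pattern, the following sketch pairs a per-window update with a continuous probe for the global sum; the Sdims interface and the MAX_LEVEL encoding are simplified, hypothetical stand-ins for the actual API.

```java
import java.util.function.LongConsumer;

// A sketch of the heavy-hitter monitoring described above: each node
// reports its per-destination byte count and watches the global sum.
// The Sdims interface is a hypothetical stand-in for the actual API.
public class HeavyHitterMonitor {
    interface Sdims {
        void update(String type, String name, long value);
        // Continuous probe: invokes the callback on each re-evaluation
        // of the aggregate at the given level.
        void probeContinuous(String type, String name, int level, LongConsumer callback);
    }

    static final int MAX_LEVEL = Integer.MAX_VALUE; // stands in for level = MAX

    static void monitor(Sdims sdims, String destIp, long bytesLastWindow, long limit) {
        // Report local traffic to destIp for the last time window; the
        // DestBW aggregation function (installed Update-Up) sums children.
        sdims.update("DestBW", destIp, bytesLastWindow);
        // Watch the global aggregate and raise an alarm above the limit.
        sdims.probeContinuous("DestBW", destIp, MAX_LEVEL, total -> {
            if (total > limit)
                System.err.println("Heavy hitter: " + destIp + " total=" + total);
        });
    }
}
```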
All these systems implicitly perform aggregation on some attribute, and each of them must be designed to handle reconfigurations in the underlying DHT. With the aggregation abstraction provided by our system, designing and building such applications becomes easier.

Internal DHT trees typically do not satisfy the domain locality properties required in our system. Castro et al. [7] and Gummadi et al. [17] point out the importance of path convergence from the perspective of achieving efficiency and investigate the performance of Pastry and other DHT algorithms, respectively. SkipNet [18] provides domain-restricted routing, where a key search is limited to a specified domain. This interface can be used to ensure path convergence by searching in the lowest domain and moving up to the next domain when the search reaches the root in the current domain. Although this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs, as the domain-constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).

9. CONCLUSIONS

This paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications. For large scale systems, hierarchical aggregation is a fundamental abstraction for scalability. We build our system by extending ideas from Astrolabe and DHTs to achieve (i) scalability with respect to both nodes and attributes through a new aggregation abstraction that helps leverage DHTs' internal trees for aggregation, (ii) flexibility through a simple API that lets applications control the propagation of reads and writes, (iii) administrative isolation through simple augmentations of current DHT algorithms, and (iv) robustness to node and network reconfigurations through lazy re-aggregation, on-demand re-aggregation, and tunable spatial replication.
J-40
Networks Preserving Evolutionary Equilibria and the Power of Randomization
We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them.
[ "network", "evolutionari game theori", "game theori", "pairwis interact", "undirect graph", "evolutionari stabl strategi", "edg densiti condit", "mutat set", "natur strengthen", "nash equilibrium", "random power", "geograph restrict", "graph topolog", "equilibrium outcom", "topolog relationship", "graph-theoret model" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "R", "M", "M", "U", "U", "U" ]
Networks Preserving Evolutionary Equilibria and the Power of Randomization Michael Kearns mkearns@cis.upenn.edu Siddharth Suri ssuri@cis.upenn.edu Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 ABSTRACT We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Theory 1. INTRODUCTION In this paper, we introduce and examine a natural extension of classical evolutionary game theory (EGT) to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. This extension generalizes the classical setting, in which all pairs of organisms in an infinite population are equally likely to interact. The classical setting can be viewed as the special case in which the underlying network is a clique. There are many obvious reasons why one would like to examine more general graphs, the primary one being that in many scenarios considered in evolutionary game theory, not all interactions are in fact possible. For example, geographical restrictions may limit interactions to physically proximate pairs of organisms. More generally, as evolutionary game theory has become a plausible model not only for biological interaction, but also for economic and other kinds of interaction in which certain dynamics are more imitative than optimizing (see [2, 16] and chapter 4 of [19]), the network constraints may come from similarly more general sources. Evolutionary game theory on networks has been considered before, but not in the generality we consider here (see Section 4). We generalize the definition of an evolutionary stable strategy (ESS) to networks, and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. The work described here is part of recent efforts examining the relationship between graph topology or structure and properties of equilibrium outcomes. Previous works in this line include studies of the relationship of topology to properties of correlated equilibria in graphical games [11], and studies of price variation in graph-theoretic market exchange models [12]. More generally, this work contributes to the line of graph-theoretic models for game theory investigated in both computer science [13] and economics [10]. 2. CLASSICAL EGT The fundamental concept of evolutionary game theory is the evolutionarily stable strategy (ESS).
Intuitively, an ESS is a strategy such that if all the members of a population adopt it, then no mutant strategy can invade the population [17]. To make this more precise, we describe the basic model of evolutionary game theory, in which the notion of an ESS resides. The standard model of evolutionary game theory considers an infinite population of organisms, each of which plays a strategy in a fixed, 2-player, symmetric game. The game is defined by a fitness function F. All pairs of members of the infinite population are equally likely to interact with one another. If two organisms interact, one playing strategy s and the other playing strategy t, the s-player earns a fitness of F(s|t) while the t-player earns a fitness of F(t|s). In this infinite population of organisms, suppose there is a 1 − ε fraction who play strategy s, and call these organisms incumbents; and suppose there is an ε fraction who play t, and call these organisms mutants. Assume two organisms are chosen uniformly at random to play each other. The strategy s is an ESS if the expected fitness of an organism playing s is higher than that of an organism playing t, for all t ≠ s and all sufficiently small ε. Since an incumbent will meet another incumbent with probability 1 − ε and it will meet a mutant with probability ε, we can calculate the expected fitness of an incumbent, which is simply (1 − ε)F(s|s) + εF(s|t). Similarly, the expected fitness of a mutant is (1 − ε)F(t|s) + εF(t|t). Thus we come to the formal definition of an ESS [19]. Definition 2.1. A strategy s is an evolutionarily stable strategy (ESS) for the 2-player, symmetric game given by fitness function F, if for every strategy t ≠ s, there exists an ε_t such that for all 0 < ε < ε_t, (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t). A consequence of this definition is that for s to be an ESS, it must be the case that F(s|s) ≥ F(t|s), for all strategies t. This inequality means that s must be a best response to itself, and thus any ESS strategy s must also be a Nash equilibrium. In general the notion of ESS is more restrictive than Nash equilibrium, and not all 2-player, symmetric games have an ESS. In this paper our interest is to examine what kinds of network structure preserve the ESS strategies for those games that do have a standard ESS. First we must of course generalize the definition of ESS to a network setting. 3. EGT ON GRAPHS In our setting, we will no longer assume that two organisms are chosen uniformly at random to interact. Instead, we assume that organisms interact only with those in their local neighborhood, as defined by an undirected graph or network. As in the classical setting (which can be viewed as the special case of the complete network or clique), we shall assume an infinite population, by which we mean we examine limiting behavior in a family of graphs of increasing size. Before giving formal definitions, some comments are in order on what to expect in moving from the classical to the graph-theoretic setting. In the classical (complete graph) setting, there exist many symmetries that may be broken in moving to the network setting, at both the group and individual level. Indeed, such asymmetries are the primary interest in examining a graph-theoretic generalization. For example, at the group level, in the standard ESS definition, one need not discuss any particular set of mutants of population fraction ε. Since all organisms are equally likely to interact, the survival or fate of any specific mutant set is identical to that of any other.
In the network setting, this may not be true: some mutant sets may be better able to survive than others due to the specific topologies of their interactions in the network. For instance, foreshadowing some of our analysis, if s is an ESS but F(t|t) is much larger than F(s|s) and F(s|t), a mutant set with a great deal of internal interaction (that is, edges between mutants) may be able to survive, whereas one without this may suffer. At the level of individuals, in the classical setting, the assertion that one mutant dies implies that all mutants die, again by symmetry. In the network setting, individual fates may differ within a group all playing a common strategy. These observations imply that in examining ESS on networks we face definitional choices that were obscured in the classical model. If G is a graph representing the allowed pairwise interactions between organisms (vertices), and u is a vertex of G playing strategy s_u, then the fitness of u is given by F(u) = Σ_{v∈Γ(u)} F(s_u|s_v) / |Γ(u)|. Here s_v is the strategy being played by the neighbor v, and Γ(u) = {v ∈ V : (u, v) ∈ E}. One can view the fitness of u as the average fitness u would obtain if it played each of its neighbors, or the expected fitness u would obtain if it were assigned to play one of its neighbors chosen uniformly at random. Classical evolutionary game theory examines an infinite, symmetric population. Graphs or networks are inherently finite objects, and we are specifically interested in their asymmetries, as discussed above. Thus all of our definitions shall revolve around an infinite family G = {Gn}∞n=0 of finite graphs Gn over n vertices, but we shall examine asymptotic (large n) properties of such families. We first give a definition for a family of mutant vertex sets in such an infinite graph family to contract. Definition 3.1. Let G = {Gn}∞n=0 be an infinite family of graphs, where Gn has n vertices. Let M = {Mn}∞n=0 be any family of subsets of vertices of the Gn such that |Mn| ≥ εn for some constant ε > 0. Suppose all the vertices of Mn play a common (mutant) strategy t, and suppose the remaining vertices in Gn play a common (incumbent) strategy s. We say that Mn contracts if, for sufficiently large n, for all but o(n) of the j ∈ Mn, j has an incumbent neighbor i such that F(j) < F(i). A reasonable alternative would be to ask that the condition above hold for all mutants rather than all but o(n). Note also that we only require that a mutant have one incumbent neighbor of higher fitness in order to die; one might consider requiring more. In Sections 6.1 and 6.2 we consider these stronger conditions and demonstrate that our results can no longer hold. In order to properly define an ESS for an infinite family of finite graphs in a way that recovers the classical definition asymptotically in the case of the family of complete graphs, we first must give a definition that restricts attention to families of mutant vertices that are smaller than some invasion threshold εn, yet remain some constant fraction of the population. This prevents invasions that survive merely by constituting a vanishing fraction of the population. Definition 3.2. Let ε > 0, and let G = {Gn}∞n=0 be an infinite family of graphs, where Gn has n vertices. Let M = {Mn}∞n=0 be any family of (mutant) vertices in Gn. We say that M is ε-linear if there exists an ε′, ε > ε′ > 0, such that for all sufficiently large n, εn > |Mn| > ε′n.
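As a concrete illustration of these definitions, the following is a minimal Python sketch (our own, not from the paper) that computes the graph fitness F(u) above for every vertex and then reports which mutants fail the contraction test, that is, which mutants have no incumbent neighbor of strictly higher fitness. The payoff table and the toy 4-cycle are hypothetical choices made for the example.

```python
from collections import defaultdict

# Payoffs: F[(a, b)] is the fitness F(a|b) an a-player earns against b.
# These numbers are made up; note F(s|s) > F(t|s), so s is a classical ESS.
F = {("s", "s"): 3.0, ("s", "t"): 2.5, ("t", "s"): 2.0, ("t", "t"): 1.0}

def fitness(u, strategy, adj):
    """The graph fitness F(u): average payoff of u against its neighbors."""
    nbrs = adj[u]
    return sum(F[(strategy[u], strategy[v])] for v in nbrs) / len(nbrs)

def surviving_mutants(adj, strategy):
    """Mutants with no incumbent neighbor of strictly higher fitness.

    Definition 3.1 says the mutant set contracts when, for large n, all
    but o(n) of the mutants fall outside this list.
    """
    fit = {u: fitness(u, strategy, adj) for u in adj}
    return [u for u in adj if strategy[u] == "t"
            and not any(strategy[v] == "s" and fit[v] > fit[u]
                        for v in adj[u])]

# Toy instance: a 4-cycle with a single mutant at vertex 0.
adj = defaultdict(list)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    adj[a].append(b)
    adj[b].append(a)
strategy = {0: "t", 1: "s", 2: "s", 3: "s"}
print(surviving_mutants(adj, strategy))  # -> [] : the lone mutant dies
```

Here the lone mutant earns F(t|s) = 2.0 while its incumbent neighbors earn (F(s|t) + F(s|s))/2 = 2.75, so the printed survivor list is empty.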
We can now give our definition for a strategy to be evolutionarily stable when employed by organisms interacting with their neighborhood in a graph. Definition 3.3. Let G = {Gn}∞n=0 be an infinite family of graphs, where Gn has n vertices. Let F be any 2-player, symmetric game for which s is a strategy. We say that s is an ESS with respect to F and G if for all mutant strategies t ≠ s, there exists an ε_t > 0 such that for any ε_t-linear family of mutant vertices M = {Mn}∞n=0 all playing t, for n sufficiently large, Mn contracts. Thus, to violate the ESS property for G, one must witness a family of mutations M in which each Mn is an arbitrarily small but nonzero constant fraction of the population of Gn, but does not contract (i.e., every mutant set has a subset of linear size that survives all of its incumbent interactions). In Section A.1 we show that the definition given coincides with the classical one in the case where G is the family of complete graphs, in the limit of large n. We note that even in the classical model, small sets of mutants were allowed to have greater fitness than the incumbents, as long as the size of the set was o(n) [18]. In the definition above there are three parameters: the game F, the graph family G and the mutation family M. Our main results will hold for any 2-player, symmetric game F. We will also study two rather general settings for G and M: that in which G is a family of random graphs and M is arbitrary, and that in which G is nearly arbitrary and M is randomly chosen. In both cases, we will see that, subject to conditions on degree or edge density (essentially forcing connectivity of G but not much more), for any 2-player, symmetric game, the ESS of the classical settings, and only those strategies, are always preserved. Thus a common theme of these results is the power of randomization: as long as either the network itself is chosen randomly, or the mutation set is chosen randomly, classical ESS are preserved. 4. RELATED WORK There has been previous work that analyzes which strategies are resilient to mutant invasions with respect to various types of graphs. What sets our work apart is that the model we consider encompasses a significantly more general class of games and graph topologies. We will briefly survey this literature and point out the differences between the previous models and ours. In [8], [3], and [4], the authors consider specific families of graphs, such as cycles and lattices, where players play specific games, such as 2 × 2-games or k × k-coordination games. In these papers the authors specify a simple, local dynamic for players to improve their payoffs by changing strategies, and analyze what type of strategies will grow to dominate the population. The model we propose is more general than both of these, as it encompasses a larger class of graphs as well as a richer set of games. Also related to our work is that of [14], where the authors propose two models. The first assumes organisms interact according to a weighted, undirected graph. However, the fitness of each organism is simply assigned and does not depend on the actions of each organism's neighborhood. The second model has organisms arranged around a directed cycle, where neighbors play a 2 × 2-game. With probability proportional to its fitness, an organism is chosen to reproduce by placing a replica of itself in its neighbor's position, thereby killing the neighbor. We consider more general games than the first model and more general graphs than the second.
Finally, the works most closely related to ours are [7], [15], and [6]. The authors consider 2-action, coordination games played by players in a general undirected graph. In these three works, the authors specify a dynamic for a strategy to reproduce, and analyze properties of the graph that allow a strategy to overrun the population. Here again, one can see that our model is more general than these, as it allows for organisms to play any 2-player, symmetric game. 5. NETWORKS PRESERVING ESS We now proceed to state and prove two complementary results in the network ESS model defined in Section 3. First, we consider a setting where the graphs are generated via the Gn,p model of Erdős and Rényi [5]. In this model, every pair of vertices is joined by an edge independently and with probability p (where p may depend on n). The mutant set, however, will be constructed adversarially (subject to the linear size constraint given by Definition 3.3). For these settings, we show that for any 2-player, symmetric game, s is a classical ESS of that game, if and only if s is an ESS for {Gn,p}∞n=0, where p = Ω(1/n^c) and 0 ≤ c < 1, and any mutant family {Mn}∞n=0, where each Mn has linear size. We note that under these settings, if we let c = 1 − γ for small γ > 0, the expected number of edges in Gn is n^{1+γ} or larger, that is, just superlinear in the number of vertices and potentially far smaller than O(n^2). It is easy to convince oneself that once the graphs have only a linear number of edges, we are flirting with disconnectedness, and there may simply be large mutant sets that can survive in isolation due to the lack of any incumbent interactions in certain games. Thus in some sense we examine the minimum plausible edge density. The second result is a kind of dual to the first, considering a setting where the graphs are chosen arbitrarily (subject to conditions) but the mutant sets are chosen randomly. It states that for any 2-player, symmetric game, s is a classical ESS for that game, if and only if s is an ESS for any {Gn = (Vn, En)}∞n=0 in which for all v ∈ Vn, deg(v) = Ω(n^γ) (for any constant γ > 0), and a family of mutant sets {Mn}∞n=0 that is chosen randomly (that is, in which each organism is labeled a mutant with constant probability ε > 0). Thus, in this setting we again find that classical ESS are preserved subject to edge density restrictions. Since the degree assumption is somewhat strong, we also prove another result which only assumes that |En| ≥ n^{1+γ}, and shows that there must exist at least 1 mutant with an incumbent neighbor of higher fitness (as opposed to showing that all but o(n) mutants have an incumbent neighbor of higher fitness). As will be discussed, this rules out stationary mutant invasions. 5.1 Random Graphs, Adversarial Mutations Now we state and prove a theorem which shows that if s is a classical ESS, then s will be an ESS for random graphs, where a linear sized set of mutants is chosen by an adversary. Theorem 5.1. Let F be any 2-player, symmetric game, and suppose s is a classical ESS of F. Let the infinite graph family {Gn}∞n=0 be drawn according to Gn,p, where p = Ω(1/n^c) and 0 ≤ c < 1. Then with probability 1, s is an ESS. The main idea of the proof is to divide mutants into 2 categories, those with normal fitness and those with abnormal fitness. First, we show all but o(n) of the population (incumbent or mutant) have an incumbent neighbor of normal fitness.
This will imply that all but o(n) of the mutants of normal fitness have an incumbent neighbor of higher fitness. The vehicle for proving this is Theorem 2.15 of [5], which gives an upper bound on the number of vertices not connected to a sufficiently large set. This theorem assumes that the size of this large set is known with equality, which necessitates the union bound argument below. Secondly, we show that there can be at most o(n) mutants with abnormal fitness. Since there are so few of them, even if none of them have an incumbent neighbor of higher fitness, s will still be an ESS with respect to F and G. Proof. (Sketch) Let t ≠ s be the mutant strategy. Since s is a classical ESS, there exists an ε_t such that (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t), for all 0 < ε < ε_t. Let M be any mutant family that is ε_t-linear. Thus for any fixed value of n that is sufficiently large, there exists an ε such that |Mn| = εn and ε_t > ε > 0. Also, let In = Vn \ Mn and let I′ ⊆ In be the set of incumbents that have fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)] for some constant τ, 0 < τ < 1/6. Lemma 5.1 below shows that (1 − ε)n ≥ |I′| ≥ (1 − ε)n − 24 log n/(τ²p). Finally, let T_I′ = {x ∈ V \ I′ : Γ(x) ∩ I′ = ∅}. (For the sake of clarity we suppress the subscript n on the sets I′ and T.) The union bound gives us Pr(|T_I′| ≥ δn) ≤ Σ_{i=(1−ε)n−24 log n/(τ²p)}^{(1−ε)n} Pr(|T_I′| ≥ δn and |I′| = i). (1) Letting δ = n^{−γ} for some γ > 0 gives δn = o(n). We will apply Theorem 2.15 of [5] to the summand on the right hand side of Equation 1. If we let γ = (1 − c)/2 and combine this with the fact that 0 ≤ c < 1, all of the requirements of this theorem will be satisfied (details omitted). Now when we apply this theorem to Equation 1, we get Pr(|T_I′| ≥ δn) ≤ Σ_{i=(1−ε)n−24 log n/(τ²p)}^{(1−ε)n} exp(−(1/6)Cδn) = o(1). (2) This is because Equation 2 has only 24 log n/(τ²p) terms, and Theorem 2.15 of [5] gives us that C ≥ (1 − ε)n^{1−c} − 24 log n/τ². Thus we have shown, with probability tending to 1 as n → ∞, at most o(n) individuals are not attached to an incumbent which has fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]. This implies that the number of mutants of approximately normal fitness not attached to an incumbent of approximately normal fitness is also o(n). Now those mutants of approximately normal fitness that are attached to an incumbent of approximately normal fitness have fitness in the range (1 ± τ)[(1 − ε)F(t|s) + εF(t|t)]. The incumbents that they are attached to have fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]. Since s is an ESS of F, we know (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t); thus if we choose τ small enough, we can ensure that all but o(n) mutants of normal fitness have a neighboring incumbent of higher fitness. Finally, by Lemma 5.1, we know there are at most o(n) mutants of abnormal fitness. So even if all of them are more fit than their respective incumbent neighbors, we have shown all but o(n) of the mutants have an incumbent neighbor of higher fitness. We now state and prove the lemma used in the proof above. Lemma 5.1. For almost every graph Gn,p with (1 − ε)n incumbents, all but 24 log n/(δ²p) incumbents have fitness in the range (1 ± δ)[(1 − ε)F(s|s) + εF(s|t)], where p = Ω(1/n^c) and ε, δ and c are constants satisfying 0 < ε < 1, 0 < δ < 1/6, 0 ≤ c < 1. Similarly, under the same assumptions, all but 24 log n/(δ²p) mutants have fitness in the range (1 ± δ)[(1 − ε)F(t|s) + εF(t|t)]. Proof. We define the mutant degree of a vertex to be the number of mutant neighbors of that vertex, and incumbent degree analogously.
Observe that the only way for an incumbent to have fitness far from its expected value of (1 − ε)F(s|s) + εF(s|t) is if it has a fraction of mutant neighbors either much higher or much lower than ε. Theorem 2.14 of [5] gives us a bound on the number of such incumbents. It states that the number of incumbents with mutant degree outside the range (1 ± δ)p|M| is at most 12 log n/(δ²p). By the same theorem, the number of incumbents with incumbent degree outside the range (1 ± δ)p|I| is at most 12 log n/(δ²p). From the linearity of fitness as a function of the fraction of mutant or incumbent neighbors, one can show that for those incumbents with mutant and incumbent degree in the expected range, their fitness is within a constant factor of (1 − ε)F(s|s) + εF(s|t), where that constant goes to 1 as n tends to infinity and δ tends to 0. The proof for the mutant case is analogous. We note that if in the statement of Theorem 5.1 we let c = 0, then p = 1. This, in turn, makes G = {Kn}∞n=0, where Kn is a clique of n vertices. Then for any Kn all of the incumbents will have identical fitness and all of the mutants will have identical fitness. Furthermore, since s was an ESS for G, the incumbent fitness will be higher than the mutant fitness. Finally, one can show that as n → ∞, the incumbent fitness converges to (1 − ε)F(s|s) + εF(s|t), and the mutant fitness converges to (1 − ε)F(t|s) + εF(t|t). In other words, s must be a classical ESS, providing a converse to Theorem 5.1. We rigorously present this argument in Section A.1. 5.2 Adversarial Graphs, Random Mutations We now move on to our second main result. Here we show that if the graph family, rather than being chosen randomly, is arbitrary subject to a minimum degree requirement, and the mutation sets are randomly chosen, classical ESS are again preserved. A modified notion of ESS allows us to considerably weaken the degree requirement to a minimum edge density requirement. Theorem 5.2. Let G = {Gn = (Vn, En)}∞n=0 be an infinite family of graphs in which for all v ∈ Vn, deg(v) = Ω(n^γ) (for any constant γ > 0). Let F be any 2-player, symmetric game, and suppose s is a classical ESS of F. Let t be any mutant strategy, and let the mutant family M = {Mn}∞n=0 be chosen randomly by labeling each vertex a mutant with constant probability ε, where ε_t > ε > 0. Then with probability 1, s is an ESS with respect to F, G and M. Proof. Let t ≠ s be the mutant strategy and let X be the event that every incumbent has fitness within the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)], for some constant τ > 0 to be specified later. Similarly, let Y be the event that every mutant has fitness within the range (1 ± τ)[(1 − ε)F(t|s) + εF(t|t)]. Since Pr(X ∩ Y) = 1 − Pr(¬X ∪ ¬Y), we proceed by showing Pr(¬X ∪ ¬Y) = o(1). ¬X is the event that there exists an incumbent with fitness outside the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]. If deg_M(v) denotes the number of mutant neighbors of v and deg_I(v) denotes the number of incumbent neighbors of v, then an incumbent i has fitness (deg_I(i)/deg(i))F(s|s) + (deg_M(i)/deg(i))F(s|t). Since F(s|s) and F(s|t) are fixed quantities, the only variation in an incumbent's fitness can come from variation in the terms deg_I(i)/deg(i) and deg_M(i)/deg(i). One can use the Chernoff bound followed by the union bound to show that for any incumbent i, Pr(F(i) ∉ (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]) < 4 exp(−ε deg(i)τ²/3).
Next one can use the union bound again to bound the probability of the event ¬X: Pr(¬X) ≤ 4n exp(−ε d_i τ²/3), where d_i = min_{i∈V\M} deg(i) and 0 < ε ≤ 1/2. An analogous argument can be made to show Pr(¬Y) < 4n exp(−ε d_j τ²/3), where d_j = min_{j∈M} deg(j) and 0 < ε ≤ 1/2. Thus, by the union bound, Pr(¬X ∪ ¬Y) < 8n exp(−ε dτ²/3), where d = min_{v∈V} deg(v) and 0 < ε ≤ 1/2. Since deg(v) = Ω(n^γ) for all v ∈ V, and ε, τ and γ are all constants greater than 0, lim_{n→∞} 8n exp(−ε dτ²/3) = 0, so Pr(¬X ∪ ¬Y) = o(1). Thus, we can choose τ small enough such that (1 + τ)[(1 − ε)F(t|s) + εF(t|t)] < (1 − τ)[(1 − ε)F(s|s) + εF(s|t)], and then choose n large enough such that with probability 1 − o(1), every incumbent will have fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)], and every mutant will have fitness in the range (1 ± τ)[(1 − ε)F(t|s) + εF(t|t)]. So with high probability, every incumbent will have a higher fitness than every mutant. By arguments similar to those following the proof of Theorem 5.1, if we let G = {Kn}∞n=0, each incumbent will have the same fitness and each mutant will have the same fitness. Furthermore, since s is an ESS for G, the incumbent fitness must be higher than the mutant fitness. Here again, one has to show that as n → ∞, the incumbent fitness converges to (1 − ε)F(s|s) + εF(s|t), and the mutant fitness converges to (1 − ε)F(t|s) + εF(t|t). Observe that the exact fraction of mutants in Vn is now a random variable. So to prove this convergence we use an argument similar to one that is used to prove that a sequence of random variables that converges in probability also converges in distribution (details omitted). This in turn establishes that s must be a classical ESS, and we thus obtain a converse to Theorem 5.2. This argument is made rigorous in Section A.2. The assumption on the degree of each vertex in Theorem 5.2 is rather strong. The following theorem relaxes this requirement and only necessitates that every graph have n^{1+γ} edges, for some constant γ > 0, in which case it shows there will always be at least 1 mutant with an incumbent neighbor of higher fitness. A strategy that is an ESS in this weakened sense will essentially rule out stable, static sets of mutant invasions, but not more complex invasions. An example of more complex invasions are mutant sets that survive, but only by perpetually migrating through the graph under some natural evolutionary dynamics, akin to gliders in the well-known Game of Life [1]. Theorem 5.3. Let F be any game, and let s be a classical ESS of F, and let t ≠ s be a mutant strategy. For any graph family G = {Gn = (Vn, En)}∞n=0 in which |En| ≥ n^{1+γ} (for any constant γ > 0), and any mutant family M = {Mn}∞n=0 which is determined by labeling each vertex a mutant with probability ε, where ε_t > ε > 0, the probability that there exists a mutant with an incumbent neighbor of higher fitness approaches 1 as n → ∞. Proof. (Sketch) The main idea behind the proof is to show that with high probability, over only the choice of mutants, there will be an incumbent-mutant edge in which both vertices have high degree. If their degree is high enough, we can show that close to an ε fraction of their neighbors are mutants, and thus their fitnesses are very close to what we expect them to be in the classical case. Since s is an ESS, the fitness of the incumbent will be higher than that of the mutant. We call an edge (i, j) ∈ En a g(n)-barbell if deg(i) ≥ g(n) and deg(j) ≥ g(n). Suppose Gn has at most h(n) edges that are g(n)-barbells.
This means there are at least |En| − h(n) edges in which at least one vertex has degree at most g(n). We call these vertices light vertices. Let ℓ(n) be the number of light vertices in Gn. Observe that |En| − h(n) ≤ ℓ(n)g(n). This is because each light vertex is incident on at most g(n) edges. This gives us that |En| ≤ h(n) + ℓ(n)g(n) ≤ h(n) + ng(n). So if we choose h(n) and g(n) such that h(n) + ng(n) = o(n^{1+γ}), then |En| = o(n^{1+γ}). This contradicts the assumption that |En| = Ω(n^{1+γ}). Thus, subject to the above constraint on h(n) and g(n), Gn must contain at least h(n) edges that are g(n)-barbells. Now let Hn denote the subgraph induced by the barbell edges of Gn. Note that regardless of the structure of Gn, there is no reason that Hn should be connected. Thus, let m be the number of connected components of Hn, and let c_1, c_2, ..., c_m be the number of vertices in each of these connected components. Note that since Hn is an edge-induced subgraph we have c_k ≥ 2 for all components k. Let us choose the mutant set by first flipping the vertices in Hn only. We now show that the probability, with respect to the random mutant set, that none of the components of Hn have an incumbent-mutant edge is exponentially small in n. Let An be the event that every component of Hn contains only mutants or only incumbents. Then algebraic manipulations can establish that Pr[An] = Π_{k=1}^m (ε^{c_k} + (1 − ε)^{c_k}) ≤ (1 − ε)^{(1 − β²/2) Σ_{k=1}^m c_k}, where β is a constant. Thus for sufficiently small ε the bound decreases exponentially with Σ_{k=1}^m c_k. Furthermore, since Σ_{k=1}^m (c_k choose 2) ≥ h(n) (with equality achieved by making each component a clique), one can show that Σ_{k=1}^m c_k ≥ √h(n). Thus, as long as h(n) → ∞ with n, the probability that all components are uniformly labeled will go to 0. Now assuming that there exists a non-uniformly labeled component, by construction that component contains an edge (i, j), where i is an incumbent and j is a mutant, that is a g(n)-barbell. We also assume that the h(n) vertices already labeled have been done so arbitrarily, but that the remaining g(n) − h(n) vertices neighboring i and j are labeled mutants independently with probability ε. Then via a standard Chernoff bound argument, one can show that with high probability, the fraction of mutants neighboring i and the fraction of mutants neighboring j is in the range (1 ± τ)ε(g(n) − h(n))/g(n). Similarly, one can show that the fraction of incumbents neighboring i and the fraction of incumbents neighboring j is in the range 1 − (1 ± τ)ε(g(n) − h(n))/g(n). Since s is an ESS, there exists a ζ > 0 such that (1 − ε)F(s|s) + εF(s|t) = (1 − ε)F(t|s) + εF(t|t) + ζ. If we choose g(n) = n^γ and h(n) = o(g(n)), we can choose n large enough and τ small enough to force F(i) > F(j), as desired. 6. LIMITATIONS OF STRONGER MODELS In this section we show that if one tried to strengthen the model described in Section 3 in two natural ways, one would not be able to prove results as strong as Theorems 5.1 and 5.2, which hold for every 2-player, symmetric game. 6.1 Stronger Contraction for the Mutant Set In Section 3 we alluded to the fact that we made certain design decisions in arriving at Definitions 3.1, 3.2 and 3.3. One such decision was to require that all but o(n) mutants have incumbent neighbors of higher fitness. Instead, we could have required that all mutants have an incumbent neighbor of higher fitness.
The two theorems in this subsection show that if one were to strengthen our notion of contraction for the mutant set, given by Definition 3.1, in this way, it would be impossible to prove theorems analogous to Theorems 5.1 and 5.3. Recall that Definition 3.1 gave the notion of contraction for a linear sized subset of mutants. In what follows, we will say an edge (i, j) contracts if i is an incumbent, j is a mutant, and F(i) > F(j). Also, recall that Theorem 5.1 stated that if s is a classical ESS, then it is an ESS for random graphs with adversarial mutations. Next, we prove that if we instead required every incumbent-mutant edge to contract, this need not be the case. Theorem 6.1. Let F be a 2-player, symmetric game that has a classical ESS s for which there exists a mutant strategy t ≠ s with F(t|t) > F(s|s) and F(t|t) > F(s|t). Let G = {Gn}∞n=0 be an infinite family of random graphs drawn according to Gn,p, where p = Ω(1/n^c) for any constant 0 ≤ c < 1. Then with probability approaching 1 as n → ∞, there exists a mutant family M = {Mn}∞n=0, where ε_t n > |Mn| > ε′n and ε_t, ε′ > 0, in which there is an edge that does not contract. Proof. (Sketch) With probability approaching 1 as n → ∞, there exists a vertex j where deg(j) is arbitrarily close to n. So label j mutant, label one of its neighbors incumbent, denoted i, and label the rest of j's neighborhood mutant. Also, label all of i's neighbors incumbent, with the exception of j and j's neighbors (which were already labeled mutant). In this setting, one can show that F(j) will be arbitrarily close to F(t|t) and F(i) will be a convex combination of F(s|s) and F(s|t), which are both strictly less than F(t|t). Theorem 5.3 stated that if s is a classical ESS, then for graphs where |En| ≥ n^{1+γ}, for some γ > 0, and where each organism is labeled a mutant with probability ε, one edge must contract. Below we show that, for certain graphs and certain games, there will always exist one edge that will not contract. Theorem 6.2. Let F be a 2-player, symmetric game that has a classical ESS s, such that there exists a mutant strategy t ≠ s where F(t|s) > F(s|t). There exists an infinite family of graphs {Gn = (Vn, En)}∞n=0, where |En| = Θ(n²), such that for a mutant family M = {Mn}∞n=0, which is determined by labeling each vertex a mutant with probability ε > 0, the probability that there exists an edge in En that does not contract approaches 1 as n → ∞. Proof. (Sketch) Construct Gn as follows. Pick n/4 vertices u_1, u_2, ..., u_{n/4} and add edges such that they form a clique. Then, for each u_i, i ∈ [n/4], add edges (u_i, v_i), (v_i, w_i) and (w_i, x_i). With probability approaching 1 as n → ∞, there exists an i such that u_i and w_i are mutants and v_i and x_i are incumbents. Observe that F(v_i) = F(x_i) = F(s|t) and F(w_i) = F(t|s). 6.2 Stronger Contraction for Individuals The model of Section 3 requires that for an edge (i, j) to contract, the fitness of i must be greater than the fitness of j. One way to strengthen this notion of contraction would be to require that the maximum fitness incumbent in the neighborhood of j be more fit than the maximum fitness mutant in the neighborhood of j. This models the idea that each organism is trying to take over each place in its neighborhood, but only the most fit organism in the neighborhood of a vertex gets the privilege of taking it.
If we assume that we adopt this notion of contraction for individual mutants, and require that all incumbent-mutant edges contract, we will next show that Theorems 6.1 and 6.2 still hold, and thus it is still impossible to get results such as Theorems 5.1 and 5.3 which hold for every 2-player, symmetric game. In the proof of Theorem 6.1 we proved that F(i) is strictly less than F(j). Observe that the maximum fitness mutant in the neighborhood of j must have fitness at least F(j). Also observe that there is only 1 incumbent in the neighborhood of j, namely i. So under this stronger notion of contraction, the edge (i, j) will not contract. Similarly, in the proof of Theorem 6.2, observe that the only mutant in the neighborhood of w_i is w_i itself, which has fitness F(t|s). Furthermore, the only incumbents in the neighborhood of w_i are v_i and x_i, both of which have fitness F(s|t). By assumption, F(t|s) > F(s|t); thus, under this stronger notion of contraction, neither of the incumbent-mutant edges, (v_i, w_i) and (x_i, w_i), will contract. 7. REFERENCES [1] Elwyn R. Berlekamp, John Horton Conway, and Richard K. Guy. Winning Ways for Your Mathematical Plays, volume 4. AK Peters, Ltd, March 2004. [2] Jonas Björnerstedt and Karl H. Schlag. On the evolution of imitative behavior. Discussion Paper B-378, University of Bonn, 1996. [3] L. E. Blume. The statistical mechanics of strategic interaction. Games and Economic Behavior, 5:387-424, 1993. [4] L. E. Blume. The statistical mechanics of best-response strategy revision. Games and Economic Behavior, 11(2):111-145, November 1995. [5] B. Bollobás. Random Graphs. Cambridge University Press, 2001. [6] Michael Suk-Young Chwe. Communication and coordination in social networks. Review of Economic Studies, 67:1-16, 2000. [7] Glenn Ellison. Learning, local interaction, and coordination. Econometrica, 61(5):1047-1071, Sept. 1993. [8] I. Eshel, L. Samuelson, and A. Shaked. Altruists, egoists, and hooligans in a local interaction model. The American Economic Review, 88(1), 1998. [9] Geoffrey R. Grimmett and David R. Stirzaker. Probability and Random Processes. Oxford University Press, 3rd edition, 2001. [10] M. Jackson. A survey of models of network formation: Stability and efficiency. In Group Formation in Economics: Networks, Clubs and Coalitions. Cambridge University Press, 2004. [11] S. Kakade, M. Kearns, J. Langford, and L. Ortiz. Correlated equilibria in graphical games. ACM Conference on Electronic Commerce, 2003. [12] S. Kakade, M. Kearns, L. Ortiz, R. Pemantle, and S. Suri. Economic properties of social networks. Neural Information Processing Systems, 2004. [13] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. Conference on Uncertainty in Artificial Intelligence, pages 253-260, 2001. [14] E. Lieberman, C. Hauert, and M. A. Nowak. Evolutionary dynamics on graphs. Nature, 433:312-316, 2005. [15] S. Morris. Contagion. Review of Economic Studies, 67(1):57-78, 2000. [16] Karl H. Schlag. Why imitate, and if so, how? Journal of Economic Theory, 78:130-156, 1998. [17] J. M. Smith. Evolution and the Theory of Games. Cambridge University Press, 1982. [18] William L. Vickery. How to cheat against a simple mixed strategy ESS. Journal of Theoretical Biology, 127:133-139, 1987. [19] Jörgen W. Weibull. Evolutionary Game Theory. The MIT Press, 1995. APPENDIX A. GRAPHICAL AND CLASSICAL ESS In this section we explore the conditions under which a graphical ESS is also a classical ESS.
To do so, we state and prove two theorems which provide converses to each of the major theorems in Section 5. A.1 Random Graphs, Adversarial Mutations Theorem 5.1 states that if s is a classical ESS and G = {Gn,p}, where p = Ω(1/n^c) and 0 ≤ c < 1, then with probability 1 as n → ∞, s is an ESS with respect to G. Here we show that if s is an ESS with respect to G, then s is a classical ESS. In order to prove this theorem, we do not need the full generality of s being an ESS for G when p = Ω(1/n^c), where 0 ≤ c < 1. All we need is s to be an ESS for G when p = 1. In this case there are no more probabilistic events in the theorem statement. Also, since p = 1, each graph in G is a clique, so if one incumbent has a higher fitness than one mutant, then all incumbents have higher fitness than all mutants. This gives rise to the following theorem. Theorem A.1. Let F be any 2-player, symmetric game, and suppose s is a strategy for F and t ≠ s is a mutant strategy. Let G = {Kn}∞n=0. If, as n → ∞, for any ε_t-linear family of mutants M = {Mn}∞n=0, there exists an incumbent i and a mutant j such that F(i) > F(j), then s is a classical ESS of F. The proof of this theorem analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity. It also shows how the definition of ESS given in Section 3 recovers the classical definition of ESS. Proof. Since each graph in G is a clique, every incumbent will have the same number of incumbent and mutant neighbors, and every mutant will have the same number of incumbent and mutant neighbors. Thus, all incumbents will have identical fitness and all mutants will have identical fitness. Next, one can construct an ε_t-linear mutant family M in which the fraction of mutants converges to ε, for any ε where ε_t > ε > 0. So for n large enough, the number of mutants in Kn will be arbitrarily close to εn. Thus, any mutant subset of size εn will result in all incumbents having fitness (1 − εn/(n−1))F(s|s) + (εn/(n−1))F(s|t), and all mutants having fitness (1 − (εn−1)/(n−1))F(t|s) + ((εn−1)/(n−1))F(t|t). Furthermore, by assumption the incumbent fitness must be higher than the mutant fitness. This implies lim_{n→∞} [ (1 − εn/(n−1))F(s|s) + (εn/(n−1))F(s|t) > (1 − (εn−1)/(n−1))F(t|s) + ((εn−1)/(n−1))F(t|t) ] = 1. This implies (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t), for all ε where ε_t > ε > 0. A.2 Adversarial Graphs, Random Mutations Theorem 5.2 states that if s is a classical ESS for a 2-player, symmetric game F, where G is chosen adversarially subject to the constraint that the degree of each vertex is Ω(n^γ) (for any constant γ > 0), and mutants are chosen with probability ε, then s is an ESS with respect to F, G, and M. Here we show that if s is an ESS with respect to F, G, and M, then s is a classical ESS. All we will need to prove this is that s is an ESS with respect to G = {Kn}∞n=0, that is, when each vertex has degree n − 1. As in Theorem A.1, since the graphs are cliques, if one incumbent has higher fitness than one mutant, then all incumbents have higher fitness than all mutants. Thus, the theorem below is also a converse to Theorem 5.3. (Recall that Theorem 5.3 uses a weaker notion of contraction that requires only one incumbent to have higher fitness than one mutant.) Theorem A.2. Let F be any 2-player, symmetric game, and suppose s is an incumbent strategy for F and t ≠ s is a mutant strategy. Let G = {Kn}∞n=0.
If with probability 1 as n → ∞, s is an ESS for G and a mutant family M = {Mn}∞n=0, which is determined by labeling each vertex a mutant with probability ε, where ε_t > ε > 0, then s is a classical ESS of F. This proof also analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity. Since the mutants are chosen randomly, we will use an argument similar to the proof that a sequence of random variables that converges in probability also converges in distribution. In this case the sequence of random variables will be the actual fraction of mutants in each Kn. Proof. Fix any value of ε, where ε_t > ε > 0, and construct each Mn by labeling a vertex a mutant with probability ε. By the same argument as in the proof of Theorem A.1, if the actual fraction of mutants in Kn is denoted by ε_n, any mutant subset of size ε_n n will result in all incumbents having fitness (1 − ε_n n/(n−1))F(s|s) + (ε_n n/(n−1))F(s|t), and in all mutants having fitness (1 − (ε_n n − 1)/(n−1))F(t|s) + ((ε_n n − 1)/(n−1))F(t|t). This implies lim_{n→∞} Pr(s is an ESS for Gn w.r.t. ε_n n mutants) = 1 ⇒ lim_{n→∞} Pr( (1 − ε_n n/(n−1))F(s|s) + (ε_n n/(n−1))F(s|t) > (1 − (ε_n n − 1)/(n−1))F(t|s) + ((ε_n n − 1)/(n−1))F(t|t) ) = 1 ⇔ lim_{n→∞} Pr( ε_n > (F(t|s) − F(s|s))/(F(s|t) − F(s|s) − F(t|t) + F(t|s)) + (F(s|s) − F(t|t))/n ) = 1. (3) By two simple applications of the Chernoff bound and an application of the union bound, one can show that the sequence of random variables {ε_n}∞n=0 converges to ε in probability. Next, if we let X_n = −ε_n, X = −ε, b = F(t|t) − F(s|s), and a = −(F(t|s) − F(s|s))/(F(s|t) − F(s|s) − F(t|t) + F(t|s)), by Theorem A.3 below, we get that lim_{n→∞} Pr(X_n < a + b/n) = Pr(X < a). Combining this with Equation 3, Pr(ε > −a) = 1. The proof of the following theorem is very similar to the proof that a sequence of random variables that converges in probability also converges in distribution. A good explanation of this can be found in [9], which is the basis for the argument below. Theorem A.3. If {X_n}∞n=0 is a sequence of random variables that converges in probability to the random variable X, and a and b are constants, then lim_{n→∞} Pr(X_n < a + b/n) = Pr(X < a). Proof. By Lemma A.1 (see below) we have the following two inequalities: Pr(X < a + b/n − τ) ≤ Pr(X_n < a + b/n) + Pr(|X − X_n| > τ), and Pr(X_n < a + b/n) ≤ Pr(X < a + b/n + τ) + Pr(|X − X_n| > τ). Combining these gives Pr(X < a + b/n − τ) − Pr(|X − X_n| > τ) ≤ Pr(X_n < a + b/n) ≤ Pr(X < a + b/n + τ) + Pr(|X − X_n| > τ). There exists an n_0 such that for all n > n_0, |b/n| < τ, so the following statement holds for all n > n_0: Pr(X < a − 2τ) − Pr(|X − X_n| > τ) ≤ Pr(X_n < a + b/n) ≤ Pr(X < a + 2τ) + Pr(|X − X_n| > τ). Take the limit as n → ∞ of both sides of both inequalities; since X_n converges in probability to X, Pr(X < a − 2τ) ≤ lim_{n→∞} Pr(X_n < a + b/n) (4) ≤ Pr(X < a + 2τ). (5) Recall that X is a continuous random variable representing the fraction of mutants in an infinite sized graph. So if we let F_X(a) = Pr(X < a), we see that F_X(a) is a cumulative distribution function of a continuous random variable, and is therefore continuous from the right. So lim_{τ↓0} F_X(a − τ) = lim_{τ↓0} F_X(a + τ) = F_X(a). Thus if we take the limit as τ ↓ 0 of inequalities 4 and 5, we get Pr(X < a) = lim_{n→∞} Pr(X_n < a + b/n). The following lemma is quite useful, as it expresses the cumulative distribution of one random variable Y in terms of the cumulative distribution of another random variable X and the difference between X and Y. Lemma A.1. If X and Y are random variables, c ∈ R and τ > 0, then Pr(Y < c) ≤ Pr(X < c + τ) + Pr(|Y − X| > τ).
Proof. Pr(Y < c) = Pr(Y < c, X < c + τ) + Pr(Y < c, X ≥ c + τ) ≤ Pr(Y < c | X < c + τ) Pr(X < c + τ) + Pr(|Y − X| > τ) ≤ Pr(X < c + τ) + Pr(|Y − X| > τ).
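To complement the proofs, here is a small Monte Carlo sketch (our own illustration, not part of the paper) that combines the two randomized settings: it samples a Gn,p graph, labels each vertex a mutant independently with probability ε, computes the graph fitness of every vertex, and counts the mutants that have no incumbent neighbor of strictly higher fitness. For payoffs in which s is a classical ESS, this count should be a vanishing fraction of the mutant set as n grows, in the spirit of Theorems 5.1 and 5.2; the payoff table, parameter values, and function names are all assumptions of the sketch.

```python
import random

# Payoffs with classical ESS s (F(s|s) > F(t|s)); the values are made up.
F = {("s", "s"): 3.0, ("s", "t"): 2.5, ("t", "s"): 2.0, ("t", "t"): 1.0}

def trial(n=400, p=0.05, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Sample G_{n,p}: each pair of vertices joined independently w.p. p.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    # Random mutations: each vertex plays t with probability eps.
    strat = ["t" if rng.random() < eps else "s" for _ in range(n)]
    # Graph fitness F(u); the max(1, .) guard handles isolated vertices.
    fit = [sum(F[(strat[u], strat[v])] for v in adj[u]) / max(1, len(adj[u]))
           for u in range(n)]
    mutants = [u for u in range(n) if strat[u] == "t"]
    survivors = [u for u in mutants
                 if not any(strat[v] == "s" and fit[v] > fit[u]
                            for v in adj[u])]
    return len(survivors), len(mutants)

survivors, mutants = trial()
print(f"{survivors} of {mutants} mutants lack a fitter incumbent neighbor")
```

Rerunning the trial with larger n (and p respecting the edge density conditions) should drive the surviving fraction toward zero, matching the contraction behavior the theorems predict.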
Networks Preserving Evolutionary Equilibria and the Power of Randomization We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. 1. INTRODUCTION In this paper, we introduce and examine a natural extension of classical evolutionary game theory (EGT) to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. This extension generalizes the classical setting, in which all pairs of organisms in an infinite population are equally likely to interact. The classical setting can be viewed as the special case in which the underlying network is a clique. There are many obvious reasons why one would like to examine more general graphs, the primary one being in that many scenarios considered in evolutionary game theory, all interactions are in fact not possible. For example, geographical restrictions may limit interactions to physically proximate pairs of organisms. More generally, as evolutionary game theory has become a plausible model not only for biological interaction, but also economic and other kinds of interaction in which certain dynamics are more imitative than optimizing (see [2, 16] and chapter 4 of [19]), the network constraints may come from similarly more general sources. Evolutionary game theory on networks has been considered before, but not in the generality we will do so here (see Section 4). We generalize the definition of an evolutionary stable strategy (ESS) to networks, and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. The work described here is part of recent efforts examining the relationship between graph topology or structure and properties of equilibrium outcomes. Previous works in this line include studies of the relationship of topology to properties of correlated equilibria in graphical games [11], and studies of price variation in graph-theoretic market exchange models [12]. More generally, this work contributes to the line of graph-theoretic models for game theory investigated in both computer science [13] and economics [10]. 2. CLASSICAL EGT The fundamental concept of evolutionary game theory is the evolutionarily stable strategy (ESS). Intuitively, an ESS is a strategy such that if all the members of a population adopt it, then no mutant strategy could invade the population [17]. To make this more precise, we describe the basic model of evolutionary game theory, in which the notion of an ESS resides. 
The standard model of evolutionary game theory considers an infinite population of organisms, each of which plays a strategy in a fixed, 2-player, symmetric game. The game is defined by a fitness function F. All pairs of members of the infinite population are equally likely to interact with one another. If two organisms interact, one playing strategy s and the other playing strategy t, the s-player earns a fitness of F (s | t) while the t-player earns a fitness of F (t | s). In this infinite population of organisms, suppose there is a 1 − e fraction who play strategy s, and call these organisms incumbents; and suppose there is an a fraction who play t, and call these organisms mutants. Assume two organisms are chosen uniformly at random to play each other. The strategy s is an ESS if the expected fitness of an organism playing s is higher than that of an organism playing t, for all t = s and all sufficiently small E. Since an incumbent will meet another incumbent with probability 1 − e and it will meet a mutant with probability e, we can calculate the expected fitness of an incumbent, which is simply (1 − e) F (s | s) + eF (s | t). Similarly, the expected fitness of a mutant is (1 − e) F (t | s) + eF (t | t). Thus we come to the formal definition of an ESS [19]. A consequence of this definition is that for s to be an ESS, it must be the case that F (s | s)> F (t | s), for all strategies t. This inequality means that s must be a best response to itself, and thus any ESS strategy s must also be a Nash equilibrium. In general the notion of ESS is more restrictive than Nash equilibrium, and not all 2-player, symmetric games have an ESS. In this paper our interest is to examine what kinds of network structure preserve the ESS strategies for those games that do have a standard ESS. First we must of course generalize the definition of ESS to a network setting. 3. EGT ON GRAPHS In our setting, we will no longer assume that two organisms are chosen uniformly at random to interact. Instead, we assume that organisms interact only with those in their local neighborhood, as defined by an undirected graph or network. As in the classical setting (which can be viewed as the special case of the complete network or clique), we shall assume an infinite population, by which we mean we examine limiting behavior in a family of graphs of increasing size. Before giving formal definitions, some comments are in order on what to expect in moving from the classical to the graph-theoretic setting. In the classical (complete graph) setting, there exist many symmetries that may be broken in moving to the the network setting, at both the group and individual level. Indeed, such asymmetries are the primary interest in examining a graph-theoretic generalization. For example, at the group level, in the standard ESS definition, one need not discuss any particular set of mutants of population fraction E. Since all organisms are equally likely to interact, the survival or fate of any specific mutant set is identical to that of any other. In the network setting, this may not be true: some mutant sets may be better able to survive than others due to the specific topologies of their interactions in the network. For instance, foreshadowing some of our analysis, if s is an ESS but F (t | t) is much larger than F (s | s) and F (s | t), a mutant set with a great deal of "internal" interaction (that is, edges between mutants) may be able to survive, whereas one without this may suffer. 
At the level of individuals, in the classical setting, the assertion that one mutant dies implies that all mutants die, again by symmetry. In the network setting, individual fates may differ within a group all playing a common strategy. These observations imply that in examining ESS on networks we face definitional choices that were obscured in the classical model. If G is a graph representing the allowed pairwise interactions between organisms (vertices), and u is a vertex of G playing strategy su, then the fitness of u is given by Here sv is the strategy being played by the neighbor v, and r (u) = {v G V: (u, v) G E}. One can view the fitness of u as the average fitness u would obtain if it played each if its neighbors, or the expected fitness u would obtain if it were assigned to play one of its neighbors chosen uniformly at random. Classical evolutionary game theory examines an infinite, symmetric population. Graphs or networks are inherently finite objects, and we are specifically interested in their asymmetries, as discussed above. Thus all of our definitions shall revolve around an infinite family G = {Gn} n ° 0 of finite graphs Gn over n vertices, but we shall examine asymptotic (large n) properties of such families. We first give a definition for a family of mutant vertex sets in such an infinite graph family to contract. A reasonable alternative would be to ask that the condition above hold for all mutants rather than all but o (n). Note also that we only require that a mutant have one incumbent neighbor of higher fitness in order to die; one might considering requiring more. In Sections 6.1 and 6.2 we consider these stronger conditions and demonstrate that our results can no longer hold. In order to properly define an ESS for an infinite family of finite graphs in a way that recovers the classical definition asymptotically in the case of the family of complete graphs, we first must give a definition that restricts attention to families of mutant vertices that are smaller than some invasion threshold E 'n, yet remain some constant fraction of the population. This prevents "invasions" that survive merely by constituting a vanishing fraction of the population. DEFINITION 3.2. Let E> 0, and let G = {Gn} n ° 0 be an infinite family of graphs, where Gn has n vertices. Let M = {Mn} n ° 0 be any family of (mutant) vertices in Gn. We say that M is E' - linear if there exists an e, E> e> 0, such that for all sufficiently large n, En> | Mn |> en. We can now give our definition for a strategy to be evolutionarily stable when employed by organisms interacting with their neighborhood in a graph. DEFINITION 3.3. Let G = {Gn} n = 0 be an infinite family of graphs, where Gn has n vertices. Let F be any 2-player, symmetric game for which s is a strategy. We say that s is an ESS with respect to F and G if for all mutant strategies t = s, there exists an et> 0 such that for any et-linear family of mutant vertices M = {Mn} n = 0 all playing t, for n sufficiently large, Mn contracts. Thus, to violate the ESS property for G, one must witness a family of mutations M in which each Mn is an arbitrarily small but nonzero constant fraction of the population of Gn, but does not contract (i.e. every mutant set has a subset of linear size that survives all of its incumbent interactions). In Section A. 1 we show that the definition given coincides with the classical one in the case where G is the family of complete graphs, in the limit of large n. 
We note that even in the classical model, small sets of mutants were allowed to have greater fitness than the incumbents, as long as the size of the set was o(n) [18]. In the definition above there are three parameters: the game F, the graph family G and the mutation family M. Our main results will hold for any 2-player, symmetric game F. We will also study two rather general settings for G and M: that in which G is a family of random graphs and M is arbitrary, and that in which G is nearly arbitrary and M is randomly chosen. In both cases, we will see that, subject to conditions on degree or edge density (essentially forcing connectivity of G but not much more), for any 2-player, symmetric game, the ESS of the classical setting, and only those strategies, are always preserved. Thus a common theme of these results is the power of randomization: as long as either the network itself is chosen randomly, or the mutation set is chosen randomly, classical ESS are preserved.

4. RELATED WORK

There has been previous work that analyzes which strategies are resilient to mutant invasions with respect to various types of graphs. What sets our work apart is that the model we consider encompasses a significantly more general class of games and graph topologies. We will briefly survey this literature and point out the differences between the previous models and ours. In [8], [3], and [4], the authors consider specific families of graphs, such as cycles and lattices, where players play specific games, such as 2 × 2-games or k × k-coordination games. In these papers the authors specify a simple, local dynamic for players to improve their payoffs by changing strategies, and analyze what type of strategies will grow to dominate the population. The model we propose is more general than both of these, as it encompasses a larger class of graphs as well as a richer set of games. Also related to our work is that of [14], where the authors propose two models. The first assumes organisms interact according to a weighted, undirected graph. However, the fitness of each organism is simply assigned and does not depend on the actions of each organism's neighborhood. The second model has organisms arranged around a directed cycle, where neighbors play a 2 × 2-game. With probability proportional to its fitness, an organism is chosen to reproduce by placing a replica of itself in its neighbor's position, thereby "killing" the neighbor. We consider more general games than the first model and more general graphs than the second. Finally, the works most closely related to ours are [7], [15], and [6]. The authors consider 2-action, coordination games played by players in a general undirected graph. In these three works, the authors specify a dynamic for a strategy to reproduce, and analyze properties of the graph that allow a strategy to overrun the population. Here again, one can see that our model is more general than these, as it allows for organisms to play any 2-player, symmetric game.

5. NETWORKS PRESERVING ESS

We now proceed to state and prove two complementary results in the network ESS model defined in Section 3. First, we consider a setting where the graphs are generated via the Gn,p model of Erdős and Rényi [5]. In this model, every pair of vertices is joined by an edge independently and with probability p (where p may depend on n). The mutant set, however, will be constructed adversarially (subject to the linear size constraint given by Definition 3.3).
For these settings, we show that for any 2-player, symmetric game, s is a classical ESS of that game, if and only if s is an ESS for {Gn,p}n≥0, where p = Ω(1/n^c) and 0 ≤ c < 1, and any mutant family {Mn}n≥0, where each Mn has linear size. We note that under these settings, if we let c = 1 − γ for small γ > 0, the expected number of edges in Gn is n^(1+γ) or larger, that is, just superlinear in the number of vertices and potentially far smaller than O(n^2). It is easy to convince oneself that once the graphs have only a linear number of edges, we are flirting with disconnectedness, and there may simply be large mutant sets that can survive in isolation due to the lack of any incumbent interactions in certain games. Thus in some sense we examine the minimum plausible edge density. The second result is a kind of dual to the first, considering a setting where the graphs are chosen arbitrarily (subject to conditions) but the mutant sets are chosen randomly. It states that for any 2-player, symmetric game, s is a classical ESS for that game, if and only if s is an ESS for any {Gn = (Vn, En)}n≥0 in which for all v ∈ Vn, deg(v) = Ω(n^γ) (for any constant γ > 0), and a family of mutant sets {Mn}n≥0 that is chosen randomly (that is, in which each organism is labeled a mutant with constant probability ε > 0). Thus, in this setting we again find that classical ESS are preserved subject to edge density restrictions. Since the degree assumption is somewhat strong, we also prove another result which only assumes that |En| ≥ n^(1+γ), and shows that there must exist at least 1 mutant with an incumbent neighbor of higher fitness (as opposed to showing that all but o(n) mutants have an incumbent neighbor of higher fitness). As will be discussed, this rules out "stationary" mutant invasions.

5.1 Random Graphs, Adversarial Mutations

Now we state and prove a theorem which shows that if s is a classical ESS, then s will be an ESS for random graphs, where a linear sized set of mutants is chosen by an adversary.

THEOREM 5.1. Let F be any 2-player, symmetric game, and suppose s is a classical ESS of F. Let the infinite graph family {Gn}n≥0 be drawn according to Gn,p, where p = Ω(1/n^c) and 0 ≤ c < 1. Then with probability 1, s is an ESS.

The main idea of the proof is to divide mutants into 2 categories, those with "normal" fitness and those with "abnormal" fitness. First, we show all but o(n) of the population (incumbent or mutant) have an incumbent neighbor of normal fitness. This will imply that all but o(n) of the mutants of normal fitness have an incumbent neighbor of higher fitness. The vehicle for proving this is Theorem 2.15 of [5], which gives an upper bound on the number of vertices not connected to a sufficiently large set. This theorem assumes that the size of this large set is known with equality, which necessitates the union bound argument below. Secondly, we show that there can be at most o(n) mutants with abnormal fitness. Since there are so few of them, even if none of them have an incumbent neighbor of higher fitness, s will still be an ESS with respect to F and G.

PROOF. (Sketch) Let t ≠ s be the mutant strategy. Since s is a classical ESS, there exists an εt such that (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t), for all 0 < ε < εt. Let M be any mutant family that is εt-linear. Thus for any fixed value of n that is sufficiently large, there exists an ε such that |Mn| = εn and εt > ε > 0.
Also, let In = Vn \ Mn and let I ⊆ In be the set of incumbents that have fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)] for some constant τ, 0 < τ < 1/6. Lemma 5.1 below shows |I| ≥ (1 − ε)n − 24 log n/(τ²p). (For the sake of clarity we suppress the subscript n on these sets.) The union bound, taken over the admissible sizes of I, bounds the probability that more than δn vertices have no neighbor in I. Letting δ = n^(−γ) for some γ > 0 gives δn = o(n). We apply Theorem 2.15 of [5] to each summand in this union bound. If we let γ = (1 − c)/2, and combine this with the fact that 0 ≤ c < 1, all of the requirements of this theorem will be satisfied (details omitted). The resulting bound tends to 0: the union is over only 24 log n/(τ²p) terms, and Theorem 2.15 of [5] makes each term vanishingly small, since |I| ≥ (1 − ε)n − 24 log n/(τ²p) is sufficiently large. Thus we have shown, with probability tending to 1 as n → ∞, at most o(n) individuals are not attached to an incumbent which has fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]. This implies that the number of mutants of approximately normal fitness, not attached to an incumbent of approximately normal fitness, is also o(n). Now those mutants of approximately normal fitness that are attached to an incumbent of approximately normal fitness have fitness in the range (1 ± τ)[(1 − ε)F(t|s) + εF(t|t)]. The incumbents that they are attached to have fitness in the range (1 ± τ)[(1 − ε)F(s|s) + εF(s|t)]. Since s is an ESS of F, we know (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t); thus if we choose τ small enough, we can ensure that all but o(n) mutants of normal fitness have a neighboring incumbent of higher fitness. Finally by Lemma 5.1, we know there are at most o(n) mutants of abnormal fitness. So even if all of them are more fit than their respective incumbent neighbors, we have shown all but o(n) of the mutants have an incumbent neighbor of higher fitness. We now state and prove the lemma used in the proof above.

LEMMA 5.1. For almost every graph Gn,p with (1 − ε)n incumbents, all but 24 log n/(δ²p) incumbents have fitness in the range (1 ± δ)[(1 − ε)F(s|s) + εF(s|t)], where p = Ω(1/n^c) and ε, δ and c are constants satisfying 0 < ε < 1, 0 < δ < 1/6, 0 ≤ c < 1. Similarly, under the same assumptions, all but 24 log n/(δ²p) mutants have fitness in the range (1 ± δ)[(1 − ε)F(t|s) + εF(t|t)].

PROOF. We define the mutant degree of a vertex to be the number of mutant neighbors of that vertex, and incumbent degree analogously. Observe that the only way for an incumbent to have fitness far from its expected value of (1 − ε)F(s|s) + εF(s|t) is if it has a fraction of mutant neighbors either much higher or much lower than ε. Theorem 2.14 of [5] gives us a bound on the number of such incumbents. It states that the number of incumbents with mutant degree outside the range (1 ± δ)p|M| is at most 12 log n/(δ²p). By the same theorem, the number of incumbents with incumbent degree outside the range (1 ± δ)p|I| is at most 12 log n/(δ²p). From the linearity of fitness as a function of the fraction of mutant or incumbent neighbors, one can show that for those incumbents with mutant and incumbent degree in the expected range, their fitness is within a constant factor of (1 − ε)F(s|s) + εF(s|t), where that constant goes to 1 as n tends to infinity and δ tends to 0. The proof for the mutant case is analogous.

We note that if in the statement of Theorem 5.1 we let c = 0, then we may take p = 1. This, in turn, makes G = {Kn}n≥0, where Kn is a clique of n vertices.
Then for any Kn all of the incumbents will have identical fitness and all of the mutants will have identical fitness. Furthermore, since s was an ESS for G, the incumbent fitness will be higher than the mutant fitness. Finally, one can show that as n → ∞, the incumbent fitness converges to (1 − ε)F(s|s) + εF(s|t), and the mutant fitness converges to (1 − ε)F(t|s) + εF(t|t). In other words, s must be a classical ESS, providing a converse to Theorem 5.1. We rigorously present this argument in Section A.1.

5.2 Adversarial Graphs, Random Mutations

We now move on to our second main result. Here we show that if the graph family, rather than being chosen randomly, is arbitrary subject to a minimum degree requirement, and the mutation sets are randomly chosen, classical ESS are again preserved. A modified notion of ESS allows us to considerably weaken the degree requirement to a minimum edge density requirement.

THEOREM 5.2. Let G = {Gn = (Vn, En)}n≥0 be an infinite family of graphs in which for all v ∈ Vn, deg(v) = Ω(n^γ) (for any constant γ > 0). Let F be any 2-player, symmetric game, and suppose s is a classical ESS of F. Let t be any mutant strategy, and let the mutant family M = {Mn}n≥0 be chosen randomly by labeling each vertex a mutant with constant probability ε, where εt > ε > 0. Then with probability 1, s is an ESS with respect to F, G and M.

PROOF. Let t ≠ s be the mutant strategy and let X be the event that every incumbent has fitness within the range (1 ± δ)[(1 − ε)F(s|s) + εF(s|t)], for some constant δ > 0 to be specified later. Similarly, let Y be the event that every mutant has fitness within the range (1 ± δ)[(1 − ε)F(t|s) + εF(t|t)]. Since Pr(X ∧ Y) = 1 − Pr(¬X ∨ ¬Y), we proceed by showing Pr(¬X ∨ ¬Y) = o(1). ¬X is the event that there exists an incumbent with fitness outside the range (1 ± δ)[(1 − ε)F(s|s) + εF(s|t)]. If degM(v) denotes the number of mutant neighbors of v and degI(v) denotes the number of incumbent neighbors of v, then an incumbent i has fitness (degI(i)/deg(i))F(s|s) + (degM(i)/deg(i))F(s|t). Since F(s|s) and F(s|t) are fixed quantities, the only variation in an incumbent's fitness can come from variation in the terms degI(i)/deg(i) and degM(i)/deg(i). One can use the Chernoff bound followed by the union bound to show that the probability that some vertex has a mutant-neighbor fraction deviating from ε by more than a (1 ± δ) factor is at most n times a quantity exponentially small in d, where d = min over v ∈ Vn of deg(v) and 0 < δ < 1/2. Since deg(v) = Ω(n^γ) for all v ∈ Vn, and ε, δ and γ are all constants greater than 0, Pr(¬X ∨ ¬Y) = o(1). Thus, we can choose δ small enough such that (1 + δ)[(1 − ε)F(t|s) + εF(t|t)] < (1 − δ)[(1 − ε)F(s|s) + εF(s|t)], and then choose n large enough such that with probability 1 − o(1), every incumbent will have fitness in the range (1 ± δ)[(1 − ε)F(s|s) + εF(s|t)], and every mutant will have fitness in the range (1 ± δ)[(1 − ε)F(t|s) + εF(t|t)]. So with high probability, every incumbent will have a higher fitness than every mutant. By arguments similar to those following the proof of Theorem 5.1, if we let G = {Kn}n≥0, each incumbent will have the same fitness and each mutant will have the same fitness. Furthermore, since s is an ESS for G, the incumbent fitness must be higher than the mutant fitness. Here again, one has to show that as n → ∞, the incumbent fitness converges to (1 − ε)F(s|s) + εF(s|t), and the mutant fitness converges to (1 − ε)F(t|s) + εF(t|t). Observe that the exact fraction of mutants in Vn is now a random variable.
So to prove this convergence we use an argument similar to one that is used to prove that a sequence of random variables that converges in probability also converges in distribution (details omitted). This in turn establishes that s must be a classical ESS, and we thus obtain a converse to Theorem 5.2. This argument is made rigorous in Section A.2. The assumption on the degree of each vertex of Theorem 5.2 is rather strong. The following theorem relaxes this requirement and only necessitates that every graph have at least n^(1+γ) edges, for some constant γ > 0, in which case it shows there will always be at least 1 mutant with an incumbent neighbor of higher fitness. A strategy that is an ESS in this weakened sense will essentially rule out stable, static sets of mutant invasions, but not more complex invasions. An example of a more complex invasion is a mutant set that survives, but only by perpetually "migrating" through the graph under some natural evolutionary dynamics, akin to "gliders" in the well-known Game of Life [1].

THEOREM 5.3. Let F be any game, and let s be a classical ESS of F, and let t ≠ s be a mutant strategy. For any graph family G = {Gn = (Vn, En)}n≥0 in which |En| ≥ n^(1+γ) (for any constant γ > 0), and any mutant family M = {Mn}n≥0 which is determined by labeling each vertex a mutant with probability ε, where εt > ε > 0, the probability that there exists a mutant with an incumbent neighbor of higher fitness approaches 1 as n → ∞.

PROOF. (Sketch) The main idea behind the proof is to show that with high probability, over only the choice of mutants, there will be an incumbent-mutant edge in which both vertices have high degree. If their degree is high enough, we can show that close to an ε fraction of their neighbors are mutants, and thus their fitnesses are very close to what we expect them to be in the classical case. Since s is an ESS, the fitness of the incumbent will be higher than that of the mutant. We call an edge (i, j) ∈ En a g(n)-barbell if deg(i) ≥ g(n) and deg(j) ≥ g(n). Suppose Gn has at most h(n) edges that are g(n)-barbells. This means there are at least |En| − h(n) edges in which at least one vertex has degree at most g(n). We call these vertices light vertices. Let ℓ(n) be the number of light vertices in Gn. Observe that |En| − h(n) ≤ ℓ(n)g(n). This is because each light vertex is incident on at most g(n) edges. Since ℓ(n) ≤ n, this gives us |En| ≤ h(n) + ng(n). So if we choose h(n) and g(n) such that h(n) + ng(n) = o(n^(1+γ)), then |En| = o(n^(1+γ)). This contradicts the assumption that |En| = Ω(n^(1+γ)). Thus, subject to the above constraint on h(n) and g(n), Gn must contain at least h(n) edges that are g(n)-barbells. Now let Hn denote the subgraph induced by the barbell edges of Gn. Note that regardless of the structure of Gn, there is no reason that Hn should be connected. Thus, let m be the number of connected components of Hn, and let c1, c2, ..., cm be the number of vertices in each of these connected components. Note that since Hn is an edge-induced subgraph we have ck ≥ 2 for all components k. Let us choose the mutant set by first flipping the vertices in Hn only. We now show that the probability, with respect to the random mutant set, that none of the components of Hn have an incumbent-mutant edge is exponentially small in n. Let An be the event that every component of Hn contains only mutants or only incumbents. Then algebraic manipulations can establish a bound on Pr(An) that, for ε sufficiently small, decreases exponentially with the sum c1 + c2 + ... + cm.
Furthermore, since each component k can contain at most ck² edges (a component is at most a clique), one can show that c1 + c2 + ... + cm ≥ √h(n). Thus, as long as h(n) → ∞ with n, the probability that all components are uniformly labeled will go to 0. Now assuming that there exists a non-uniformly labeled component, by construction that component contains an edge (i, j), where i is an incumbent and j is a mutant, that is a g(n)-barbell. We also assume that the h(n) vertices already labeled have been labeled arbitrarily, but that the remaining (at least) g(n) − h(n) vertices neighboring i and j are labeled mutants independently with probability ε. Then via a standard Chernoff bound argument, one can show that with high probability, the fraction of mutants neighboring i and the fraction of mutants neighboring j is in the range (1 ± τ)ε(g(n) − h(n))/g(n), and similarly the fraction of incumbents neighboring i and the fraction of incumbents neighboring j is in the range 1 − (1 ± τ)ε(g(n) − h(n))/g(n). Since s is an ESS, there exists a ζ > 0 such that (1 − ε)F(s|s) + εF(s|t) = (1 − ε)F(t|s) + εF(t|t) + ζ. If we choose g(n) = n^(γ/2) and h(n) = o(g(n)), we can choose n large enough and τ small enough to force F(i) > F(j), as desired.

6. LIMITATIONS OF STRONGER MODELS

In this section we show that if one tried to strengthen the model described in Section 3 in two natural ways, one would not be able to prove results as strong as Theorems 5.1 and 5.2, which hold for every 2-player, symmetric game.

6.1 Stronger Contraction for the Mutant Set

In Section 3 we alluded to the fact that we made certain design decisions in arriving at Definitions 3.1, 3.2 and 3.3. One such decision was to require that all but o(n) mutants have incumbent neighbors of higher fitness. Instead, we could have required that all mutants have an incumbent neighbor of higher fitness. The two theorems in this subsection show that if one were to strengthen our notion of contraction for the mutant set, given by Definition 3.1, in this way, it would be impossible to prove theorems analogous to Theorems 5.1 and 5.3. Recall that Definition 3.1 gave the notion of contraction for a linear sized subset of mutants. In what follows, we will say an edge (i, j) contracts if i is an incumbent, j is a mutant, and F(i) > F(j). Also, recall that Theorem 5.1 stated that if s is a classical ESS, then it is an ESS for random graphs with adversarial mutations. Next, we prove that if we instead required every incumbent-mutant edge to contract, this need not be the case.

PROOF. (Sketch) With probability approaching 1 as n → ∞, there exists a vertex j where deg(j) is arbitrarily close to εn. So label j mutant, label one of its neighbors incumbent, denoted i, and label the rest of j's neighborhood mutant. Also, label all of i's neighbors incumbent, with the exception of j and j's neighbors (which were already labeled mutant). In this setting, one can show that F(j) will be arbitrarily close to F(t|t) and F(i) will be a convex combination of F(s|s) and F(s|t), which are both strictly less than F(t|t). Theorem 5.3 stated that if s is a classical ESS, then for graphs where |En| ≥ n^(1+γ), for some γ > 0, and where each organism is labeled a mutant with probability ε, one edge must contract. Below we show that, for certain graphs and certain games, there will always exist one edge that will not contract. The construction begins with vertices u1, ..., u_{n/4} forming a clique. Then, for each ui, i ∈ [n/4], add edges (ui, vi), (vi, wi) and (wi, xi). With probability approaching 1 as n → ∞, there exists an i such that ui and wi are mutants and vi and xi are incumbents.
Observe that F(vi) = F(xi) = F(s|t) and F(wi) = F(t|s).

6.2 Stronger Contraction for Individuals

The model of Section 3 requires that for an edge (i, j) to contract, the fitness of i must be greater than the fitness of j. One way to strengthen this notion of contraction would be to require that the maximum fitness incumbent in the neighborhood of j be more fit than the maximum fitness mutant in the neighborhood of j. This models the idea that each organism is trying to take over each place in its neighborhood, but only the most fit organism in the neighborhood of a vertex gets the privilege of taking it. If we adopt this notion of contraction for individual mutants, and require that all incumbent-mutant edges contract, we will next show that Theorems 6.1 and 6.2 still hold, and thus it is still impossible to get results such as Theorems 5.1 and 5.3 which hold for every 2-player, symmetric game. In the proof of Theorem 6.1 we proved that F(i) is strictly less than F(j). Observe that the maximum fitness mutant in the neighborhood of j must have fitness at least F(j). Also observe that there is only 1 incumbent in the neighborhood of j, namely i. So under this stronger notion of contraction, the edge (i, j) will not contract. Similarly, in the proof of Theorem 6.2, observe that the only mutant in the neighborhood of wi is wi itself, which has fitness F(t|s). Furthermore, the only incumbents in the neighborhood of wi are vi and xi, both of which have fitness F(s|t). By assumption, F(t|s) > F(s|t); thus, under this stronger notion of contraction, neither of the incumbent-mutant edges, (vi, wi) and (xi, wi), will contract.

7. REFERENCES

APPENDIX

A. GRAPHICAL AND CLASSICAL ESS

In this section we explore the conditions under which a graphical ESS is also a classical ESS. To do so, we state and prove two theorems which provide converses to each of the major theorems in Section 5.

A.1 Random Graphs, Adversarial Mutations

Theorem 5.1 states that if s is a classical ESS and G = {Gn,p}, where p = Ω(1/n^c) and 0 ≤ c < 1, then with probability 1 as n → ∞, s is an ESS with respect to G. Here we show that if s is an ESS with respect to G, then s is a classical ESS. In order to prove this theorem, we do not need the full generality of s being an ESS for G when p = Ω(1/n^c) where 0 ≤ c < 1. All we need is s to be an ESS for G when p = 1. In this case there are no more probabilistic events in the theorem statement. Also, since p = 1 each graph in G is a clique, so if one incumbent has a higher fitness than one mutant, then all incumbents have higher fitness than all mutants. This gives rise to the following theorem.

THEOREM A.1. Let F be any 2-player, symmetric game, and suppose s is a strategy for F and t ≠ s is a mutant strategy. Let G = {Kn}n≥0. If, as n → ∞, for any εt-linear family of mutants M = {Mn}n≥0, there exists an incumbent i and a mutant j such that F(i) > F(j), then s is a classical ESS of F.

The proof of this theorem analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity. It also shows how the definition of ESS given in Section 3 recovers the classical definition of ESS.

PROOF. Since each graph in G is a clique, every incumbent will have the same number of incumbent and mutant neighbors, and every mutant will have the same number of incumbent and mutant neighbors. Thus, all incumbents will have identical fitness and all mutants will have identical fitness.
Next, one can construct an εt-linear mutant family M, where the fraction of mutants converges to ε for any ε, where εt > ε > 0. So for n large enough, the number of mutants in Kn will be arbitrarily close to εn. Thus, any mutant subset of size εn will result in all incumbents having fitness ((1 − ε)n − 1)/(n − 1) · F(s|s) + εn/(n − 1) · F(s|t), and all mutants having fitness (1 − ε)n/(n − 1) · F(t|s) + (εn − 1)/(n − 1) · F(t|t). Furthermore, by assumption the incumbent fitness must be higher than the mutant fitness: ((1 − ε)n − 1)/(n − 1) · F(s|s) + εn/(n − 1) · F(s|t) > (1 − ε)n/(n − 1) · F(t|s) + (εn − 1)/(n − 1) · F(t|t). Taking the limit as n → ∞, this implies (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t), for all ε, where εt > ε > 0.

A.2 Adversarial Graphs, Random Mutations

Theorem 5.2 states that if s is a classical ESS for a 2-player, symmetric game F, where G is chosen adversarially subject to the constraint that the degree of each vertex is Ω(n^γ) (for any constant γ > 0), and mutants are chosen with probability ε, then s is an ESS with respect to F, G, and M. Here we show that if s is an ESS with respect to F, G, and M, then s is a classical ESS. All we will need to prove this is that s is an ESS with respect to G = {Kn}n≥0, that is, when each vertex has degree n − 1. As in Theorem A.1, since the graphs are cliques, if one incumbent has higher fitness than one mutant, then all incumbents have higher fitness than all mutants. Thus, the theorem below is also a converse to Theorem 5.3. (Recall that Theorem 5.3 uses a weaker notion of contraction that requires only one incumbent to have higher fitness than one mutant.)

THEOREM A.2. Let F be any 2-player, symmetric game, and suppose s is an incumbent strategy for F and t ≠ s is a mutant strategy. Let G = {Kn}n≥0. If with probability 1 as n → ∞, s is an ESS for G and a mutant family M = {Mn}n≥0, which is determined by labeling each vertex a mutant with probability ε, where εt > ε > 0, then s is a classical ESS of F.

This proof also analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity. Since the mutants are chosen randomly we will use an argument similar to the proof that a sequence of random variables that converges in probability also converges in distribution. In this case the sequence of random variables will be the actual fraction of mutants in each Kn.

PROOF. Fix any value of ε, where εt > ε > 0, and construct each Mn by labeling a vertex a mutant with probability ε. By the same argument as in the proof of Theorem A.1, if the actual fraction of mutants in Kn is denoted by ε̂n, any mutant subset of size ε̂n·n will result in all incumbents having fitness ((1 − ε̂n)n − 1)/(n − 1) · F(s|s) + ε̂n·n/(n − 1) · F(s|t), and in all mutants having fitness (1 − ε̂n)n/(n − 1) · F(t|s) + (ε̂n·n − 1)/(n − 1) · F(t|t). Recall that X is a continuous random variable representing the fraction of mutants in an infinite sized graph. So if we let FX(a) = Pr(X < a), we see that FX(a) is a cumulative distribution function of a continuous random variable, and is therefore continuous from the right. By two simple applications of the Chernoff bound and an application of the union bound, one can show that the sequence of random variables {ε̂n}n≥0 converges to ε in probability. Next, if we let Xn = −ε̂n, X = −ε, b = (F(t|t) − F(s|s))/(F(s|t) − F(s|s) − F(t|t) + F(t|s)), and a = −(F(t|s) − F(s|s))/(F(s|t) − F(s|s) − F(t|t) + F(t|s)), then by Theorem A.3 below we get that lim n→∞ Pr(Xn < a + b/n) = Pr(X < a). Combining this with the fact that, with probability 1, every incumbent is more fit than every mutant in Kn, we obtain Pr(ε > −a) = 1; unwinding the definition of a, this is exactly the classical ESS condition (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t).
The proof of the following theorem is very similar to the proof that a sequence of random variables that converges in probability also converges in distribution. A good explanation of this can be found in [9], which is the basis for the argument below.

THEOREM A.3. If {Xn}n≥0 is a sequence of random variables that converges in probability to the random variable X, and a and b are constants, then lim n→∞ Pr(Xn < a + b/n) = Pr(X < a).

PROOF. (Sketch) By Lemma A.1 below, for any δ > 0, Pr(X < a + b/n − δ) ≤ Pr(Xn < a + b/n) + Pr(|X − Xn| > δ), and a symmetric bound holds with the roles of X and Xn exchanged; letting n → ∞ and δ → 0, and using the convergence of Xn to X in probability, yields the claim.

The following lemma is quite useful, as it expresses the cumulative distribution of one random variable Y in terms of the cumulative distribution of another random variable X and the difference between X and Y.

LEMMA A.1. If X and Y are random variables and c and δ > 0 are constants, then Pr(Y < c) ≤ Pr(X < c + δ) + Pr(|X − Y| > δ).
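To complement the preceding analysis, here is an illustrative simulation sketch of the setting of Theorem 5.1: a sampled Gn,p graph with an adversarially placed linear-size mutant set, counting the mutants that lack a fitter incumbent neighbor (which should be o(n) when s is a classical ESS). The game matrix, parameter values, and function names are assumptions for illustration only, not quantities from the paper.

import random

def sample_gnp(n, p, rng):
    # Erdos-Renyi G(n, p) as an adjacency list.
    graph = {u: set() for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                graph[u].add(v)
                graph[v].add(u)
    return graph

def fitness(graph, strat, F):
    # Average payoff against neighbors, as in the graph EGT definition.
    return {u: (sum(F[(strat[u], strat[v])] for v in nb) / len(nb) if nb else 0.0)
            for u, nb in graph.items()}

def surviving_mutants(n=2000, c=0.5, eps=0.05, seed=0):
    rng = random.Random(seed)
    # Payoffs with s an ESS (F(s|s) > F(t|s)); values are illustrative.
    F = {("s", "s"): 2.0, ("s", "t"): 1.0, ("t", "s"): 1.0, ("t", "t"): 1.5}
    graph = sample_gnp(n, n ** (-c), rng)            # p = 1/n^c
    strat = {u: ("t" if u < eps * n else "s") for u in range(n)}  # adversarial placement
    fit = fitness(graph, strat, F)
    # Mutants with no incumbent neighbor of strictly higher fitness:
    return sum(1 for j in range(int(eps * n))
               if not any(strat[i] == "s" and fit[i] > fit[j] for i in graph[j]))

print(surviving_mutants())  # expected to be a vanishing fraction of n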
Networks Preserving Evolutionary Equilibria and the Power of Randomization We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. 1. INTRODUCTION In this paper, we introduce and examine a natural extension of classical evolutionary game theory (EGT) to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. This extension generalizes the classical setting, in which all pairs of organisms in an infinite population are equally likely to interact. The classical setting can be viewed as the special case in which the underlying network is a clique. There are many obvious reasons why one would like to examine more general graphs, the primary one being that in many scenarios considered in evolutionary game theory, not all interactions are in fact possible. For example, geographical restrictions may limit interactions to physically proximate pairs of organisms. More generally, as evolutionary game theory has become a plausible model not only for biological interaction, but also economic and other kinds of interaction in which certain dynamics are more imitative than optimizing (see [2, 16] and chapter 4 of [19]), the network constraints may come from similarly more general sources. Evolutionary game theory on networks has been considered before, but not in the generality we will do so here (see Section 4). We generalize the definition of an evolutionary stable strategy (ESS) to networks, and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. The work described here is part of recent efforts examining the relationship between graph topology or structure and properties of equilibrium outcomes. Previous works in this line include studies of the relationship of topology to properties of correlated equilibria in graphical games [11], and studies of price variation in graph-theoretic market exchange models [12]. More generally, this work contributes to the line of graph-theoretic models for game theory investigated in both computer science [13] and economics [10]. 2. CLASSICAL EGT 3. EGT ON GRAPHS DEFINITION 3.2. Let ε′ > 0, and let G = {Gn}n≥0 be an infinite family of graphs, where Gn has n vertices. 4. RELATED WORK There has been previous work that analyzes which strategies are resilient to mutant invasions with respect to various types of graphs. What sets our work apart is that the model we consider encompasses a significantly more general class of games and graph topologies.
We will briefly survey this literature and point out the differences between the previous models and ours. In [8], [3], and [4], the authors consider specific families of graphs, such as cycles and lattices, where players play specific games, such as 2 × 2-games or k × k-coordination games. In these papers the authors specify a simple, local dynamic for players to improve their payoffs by changing strategies, and analyze what type of strategies will grow to dominate the population. The model we propose is more general than both of these, as it encompasses a larger class of graphs as well as a richer set of games. Also related to our work is that of [14], where the authors propose two models. The first assumes organisms interact according to a weighted, undirected graph. However, the fitness of each organism is simply assigned and does not depend on the actions of each organism's neighborhood. The second model has organisms arranged around a directed cycle, where neighbors play a 2 × 2-game. With probability proportional to its fitness, an organism is chosen to reproduce by placing a replica of itself in its neighbor's position, thereby "killing" the neighbor. We consider more general games than the first model and more general graphs than the second. Finally, the works most closely related to ours are [7], [15], and [6]. The authors consider 2-action, coordination games played by players in a general undirected graph. In these three works, the authors specify a dynamic for a strategy to reproduce, and analyze properties of the graph that allow a strategy to overrun the population. Here again, one can see that our model is more general than these, as it allows for organisms to play any 2-player, symmetric game. 5. NETWORKS PRESERVING ESS 5.1 Random Graphs, Adversarial Mutations 5.2 Adversarial Graphs, Random Mutations 6. LIMITATIONS OF STRONGER MODELS 6.1 Stronger Contraction for the Mutant Set 6.2 Stronger Contraction for Individuals 7. REFERENCES APPENDIX A. GRAPHICAL AND CLASSICAL ESS A.1 Random Graphs, Adversarial Mutations A.2 Adversarial Graphs, Random Mutations
Networks Preserving Evolutionary Equilibria and the Power of Randomization We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. 1. INTRODUCTION In this paper, we introduce and examine a natural extension of classical evolutionary game theory (EGT) to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. This extension generalizes the classical setting, in which all pairs of organisms in an infinite population are equally likely to interact. The classical setting can be viewed as the special case in which the underlying network is a clique. There are many obvious reasons why one would like to examine more general graphs, the primary one being that in many scenarios considered in evolutionary game theory, not all interactions are in fact possible. For example, geographical restrictions may limit interactions to physically proximate pairs of organisms. Evolutionary game theory on networks has been considered before, but not in the generality we will do so here (see Section 4). We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them. The work described here is part of recent efforts examining the relationship between graph topology or structure and properties of equilibrium outcomes. Previous works in this line include studies of the relationship of topology to properties of correlated equilibria in graphical games [11], and studies of price variation in graph-theoretic market exchange models [12]. More generally, this work contributes to the line of graph-theoretic models for game theory investigated in both computer science [13] and economics [10]. 4. RELATED WORK There has been previous work that analyzes which strategies are resilient to mutant invasions with respect to various types of graphs. What sets our work apart is that the model we consider encompasses a significantly more general class of games and graph topologies. We will briefly survey this literature and point out the differences between the previous models and ours. In [8], [3], and [4], the authors consider specific families of graphs, such as cycles and lattices, where players play specific games, such as 2 × 2-games or k × k-coordination games. The model we propose is more general than both of these, as it encompasses a larger class of graphs as well as a richer set of games. Also related to our work is that of [14], where the authors propose two models. The first assumes organisms interact according to a weighted, undirected graph. The second model has organisms arranged around a directed cycle, where neighbors play a 2 × 2-game. We consider more general games than the first model and more general graphs than the second. Finally, the works most closely related to ours are [7], [15], and [6].
The authors consider 2-action, coordination games played by players in a general undirected graph. In these three works, the authors specify a dynamic for a strategy to reproduce, and analyze properties of the graph that allow a strategy to overrun the population. Here again, one can see that our model is more general than these, as it allows for organisms to play any 2-player, symmetric game.
H-92
Improving Web Search Ranking by Incorporating User Behavior Information
We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance.
[ "web search", "web search rank", "rank", "user behavior", "inform", "result", "feedback", "user interact", "inform retriev", "relev feedback", "score", "document", "implicit relev feedback" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "U", "U", "M" ]
Improving Web Search Ranking by Incorporating User Behavior Information Eugene Agichtein Microsoft Research eugeneag@microsoft.com Eric Brill Microsoft Research brill@microsoft.com Susan Dumais Microsoft Research sdumais@microsoft.com

ABSTRACT

We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance.

Categories and Subject Descriptors H.3.3 Information Search and Retrieval - Relevance feedback, search process; H.3.5 Online Information Services - Web-based services. General Terms Algorithms, Measurement, Experimentation

1. INTRODUCTION

Millions of users interact with search engines daily. They issue queries, follow some of the links in the results, click on ads, spend time on pages, reformulate their queries, and perform other actions. These interactions can serve as a valuable source of information for tuning and improving web search result ranking and can complement more costly explicit judgments. Implicit relevance feedback for ranking and personalization has become an active area of research. Recent work by Joachims and others exploring implicit feedback in controlled environments has shown the value of incorporating implicit feedback into the ranking process. Our motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval. How does it compare to and complement evidence from page content, anchor text, or link-based features such as inlinks or PageRank? While it is intuitive that user interactions with the web search engine should reveal at least some information that could be used for ranking, estimating user preferences in real web search settings is a challenging problem, since real user interactions tend to be more noisy than commonly assumed in the controlled settings of previous studies. Our paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned. To this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine. The specific contributions of this paper include: • Analysis of alternatives for incorporating user behavior into web search ranking (Section 3). • An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4). • A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6). We summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper.

2. BACKGROUND AND RELATED WORK

Ranking search results is a fundamental problem in information retrieval. Most common approaches primarily focus on the similarity of query and page, as well as the overall page quality [3,4,24].
However, with the increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings. Implicit relevance measures have been studied by several research groups. An overview of implicit measures is compiled in Kelly and Teevan [14]. This research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings. Closely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. Fox et al. [8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to correlate implicit measures and explicit relevance judgments for both individual queries and search sessions. This work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior. However, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions. Other studies of user behavior in web search include Pharo and Järvelin [19], but this work was not directly applied to improve ranking. More recently, Joachims et al. [12] presented an empirical evaluation of interpreting clickthrough evidence. By performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthroughs in a controlled, laboratory setting. Unfortunately, the extent to which previous research applies to real-world web search is unclear. At the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines. We build on existing research to develop robust user behavior interpretation techniques for the real web search setting. Instead of treating each user as a reliable expert, we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections.

3. INCORPORATING IMPLICIT FEEDBACK

We consider two complementary approaches to ranking with implicit feedback: (1) treating implicit feedback as independent evidence for ranking results, and (2) integrating implicit feedback features directly into the ranking algorithm. We describe the two general ranking approaches next. The specific implicit feedback features are described in Section 4, and the algorithms for interpreting and incorporating implicit feedback are described in Section 5.

3.1 Implicit Feedback as Independent Evidence

The general approach is to re-rank the results obtained by a web search engine according to observed clickthrough and other user interactions for the query in previous search sessions. Each result is assigned a score according to expected relevance/user satisfaction based on previous interactions, resulting in some preference ordering based on user interactions alone. While there has been significant work on merging multiple rankings, we adapt a simple and robust approach of ignoring the original rankers' scores, and instead simply merge the rank orders.
The main reason for ignoring the original scores is that since the feature spaces and learning algorithms are different, the scores are not directly comparable, and re-normalization tends to remove the benefit of incorporating classifier scores. We experimented with a variety of merging functions on the development set of queries (using a set of interactions from a different time period from the final evaluation sets). We found that a simple rank merging heuristic combination works well, and is robust to variations in score values from the original rankers. For a given query q, the implicit score ISd is computed for each result d from available user interaction features, resulting in the implicit rank Id for each result. We compute a merged score SM(d) for d by combining the rank obtained from implicit feedback, Id, with the original rank of d, Od:

SM(d, Id, Od, wI) = wI/(Id + 1) + 1/(Od + 1) if implicit feedback exists for result d, and SM(d, Id, Od, wI) = 1/(Od + 1) otherwise,

where the weight wI is a heuristically tuned scaling factor representing the relative importance of the implicit feedback. The query results are ordered by decreasing values of SM to produce the final ranking. One special case of this model arises when setting wI to a very large value, effectively forcing clicked results to be ranked higher than un-clicked results, an intuitive and effective heuristic that we will use as a baseline. Applying more sophisticated classifier and ranker combination algorithms may result in additional improvements, and is a promising direction for future work.
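For concreteness, the following is a minimal Python sketch of this merging heuristic (the language and helper names are assumptions; the paper gives only the formula). It takes rank dictionaries keyed by result and returns the merged scores; wI = 3 matches the value later tuned for the BM25F-RerankAll method in Section 5.3.

# Sketch of the Section 3.1 rank-merging heuristic.

def merged_scores(orig_rank, implicit_rank, w_I=3.0):
    # orig_rank: {result: O_d}; implicit_rank: {result: I_d}; ranks start at 1.
    scores = {}
    for d, O_d in orig_rank.items():
        s = 1.0 / (O_d + 1)                    # contribution of the original rank
        if d in implicit_rank:                 # implicit feedback observed for d
            s += w_I / (implicit_rank[d] + 1)  # contribution of the implicit rank
        scores[d] = s
    return scores

# Re-rank by decreasing merged score; "c" jumps ahead once clicked:
ranked = sorted(merged_scores({"a": 1, "b": 2, "c": 3}, {"c": 1}).items(),
                key=lambda kv: -kv[1])
print(ranked)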
The approach above assumes that there are no interactions between the underlying features producing the original web search ranking and the implicit feedback features. We now relax this assumption by integrating implicit feedback features directly into the ranking process.

3.2 Ranking with Implicit Feedback Features

Modern web search engines rank results based on a large number of features, including content-based features (i.e., how closely a query matches the text or title or anchor text of the document), and query-independent page quality features (e.g., PageRank of the document or the domain). In most cases, automatic (or semi-automatic) methods are developed for tuning the specific ranking function that combines these feature values. Hence, a natural approach is to incorporate implicit feedback features directly as features for the ranking algorithm. During training or tuning, the ranker can be tuned as before but with additional features. At runtime, the search engine would fetch the implicit feedback features associated with each query-result URL pair. This model requires a ranking algorithm to be robust to missing values: more than 50% of queries to web search engines are unique, with no previous implicit feedback available. We now describe such a ranker that we used to learn over the combined feature sets including implicit feedback.

3.3 Learning to Rank Web Search Results

A key aspect of our approach is exploiting recent advances in machine learning, namely trainable ranking algorithms for web search and information retrieval (e.g., [5, 11] and classical results reviewed in [3]). In our setting, explicit human relevance judgments (labels) are available for a set of web search queries and results. Hence, an attractive choice is to use a supervised machine learning technique to learn a ranking function that best predicts relevance judgments. RankNet is one such algorithm. It is a neural net tuning algorithm that optimizes feature weights to best match explicitly provided pairwise user preferences. While the specific training algorithms used by RankNet are beyond the scope of this paper, it is described in detail in [5], which includes extensive evaluation and comparison with other ranking methods. An attractive feature of RankNet is both train- and run-time efficiency: runtime ranking can be quickly computed and can scale to the web, and training can be done over thousands of queries and associated judged results. We use a 2-layer implementation of RankNet in order to model non-linear relationships between features. Furthermore, RankNet can learn with many (differentiable) cost functions, and hence can automatically learn a ranking function from human-provided labels, an attractive alternative to heuristic feature combination techniques. Hence, we will also use RankNet as a generic ranker to explore the contribution of implicit feedback for different ranking alternatives.

4. IMPLICIT USER FEEDBACK MODEL

Our goal is to accurately interpret noisy user feedback obtained by tracing user interactions with the search engine. Interpreting implicit feedback in a real web search setting is not an easy task. We characterize this problem in detail in [1], where we motivate and evaluate a wide variety of models of implicit user activities. The general approach is to represent user actions for each search result as a vector of features, and then train a ranker on these features to discover feature values indicative of relevant (and non-relevant) search results. We first briefly summarize our features and model, and the learning approach (Section 4.2) in order to provide sufficient information to replicate our ranking methods and the subsequent experiments.

4.1 Representing User Actions as Features

We model observed web search behaviors as a combination of a "background" component (i.e., query- and relevance-independent noise in user behavior, including positional biases with result interactions), and a "relevance" component (i.e., query-specific behavior indicative of the relevance of a result to a query). We design our features to take advantage of aggregated user behavior. The feature set is comprised of directly observed features (computed directly from observations for each query), as well as query-specific derived features, computed as the deviation from the overall query-independent distribution of values for the corresponding directly observed feature values. The features used to represent user interactions with web search results are summarized in Table 4.1. This information was obtained via opt-in client-side instrumentation from users of a major web search engine. We include the traditional implicit feedback features such as clickthrough counts for the results, as well as our novel derived features such as the deviation of the observed clickthrough number for a given query-URL pair from the expected number of clicks on a result in the given position. We also model the browsing behavior after a result was clicked, e.g., the average page dwell time for a given query-URL pair, as well as its deviation from the expected (average) dwell time. Furthermore, the feature set was designed to provide essential information about the user experience to make feedback interpretation robust.
For example, web search users can often determine whether a result is relevant by looking at the result title, URL, and summary; in many cases, looking at the original document is not necessary. To model this aspect of user experience we include features such as the overlap in words in title and words in query (TitleOverlap) and the fraction of words shared by the query and the result summary.

Clickthrough features:
Position - Position of the URL in current ranking
ClickFrequency - Number of clicks for this query, URL pair
ClickProbability - Probability of a click for this query and URL
ClickDeviation - Deviation from expected click probability
IsNextClicked - 1 if clicked on next position, 0 otherwise
IsPreviousClicked - 1 if clicked on previous position, 0 otherwise
IsClickAbove - 1 if there is a click above, 0 otherwise
IsClickBelow - 1 if there is a click below, 0 otherwise

Browsing features:
TimeOnPage - Page dwell time
CumulativeTimeOnPage - Cumulative time for all subsequent pages after search
TimeOnDomain - Cumulative dwell time for this domain
TimeOnShortUrl - Cumulative time on URL prefix, no parameters
IsFollowedLink - 1 if followed link to result, 0 otherwise
IsExactUrlMatch - 0 if aggressive normalization used, 1 otherwise
IsRedirected - 1 if initial URL same as final URL, 0 otherwise
IsPathFromSearch - 1 if only followed links after query, 0 otherwise
ClicksFromSearch - Number of hops to reach page from query
AverageDwellTime - Average time on page for this query
DwellTimeDeviation - Deviation from average dwell time on page
CumulativeDeviation - Deviation from average cumulative dwell time
DomainDeviation - Deviation from average dwell time on domain

Query-text features:
TitleOverlap - Words shared between query and title
SummaryOverlap - Words shared between query and snippet
QueryURLOverlap - Words shared between query and URL
QueryDomainOverlap - Words shared between query and URL domain
QueryLength - Number of tokens in query
QueryNextOverlap - Fraction of words shared with next query

Table 4.1: Some features used to represent post-search navigation history for a given query and search result URL.

Having described our feature set, we briefly review our general method for deriving a user behavior model.

4.2 Deriving a User Feedback Model

To learn to interpret the observed user behavior, we correlate user actions (i.e., the features in Table 4.1 representing the actions) with the explicit user judgments for a set of training queries. We find all the instances in our session logs where these queries were submitted to the search engine, and aggregate the user behavior features for all search sessions involving these queries. Each observed query-URL pair is represented by the features in Table 4.1, with values averaged over all search sessions, and assigned one of six possible relevance labels, ranging from Perfect to Bad, as assigned by explicit relevance judgments. These labeled feature vectors are used as input to the RankNet training algorithm (Section 3.3) which produces a trained user behavior model. This approach is particularly attractive as it does not require heuristics beyond feature engineering. The resulting user behavior model is used to help rank web search results, either directly or in combination with other features, as described below.
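As a small illustration of the derived features above, the following sketch computes a ClickDeviation-style value: the observed click probability of a query-URL pair minus the background (query-independent) click probability expected at its result position. The background table and the example numbers are invented for illustration; they are not the paper's data.

# Sketch of a Table 4.1-style derived feature (illustrative values only).

background_click_prob = {1: 0.40, 2: 0.18, 3: 0.10}  # assumed positional prior

def click_deviation(clicks, impressions, position):
    # Observed click probability for this query-URL pair minus the
    # background probability expected at this result position.
    observed = clicks / impressions if impressions else 0.0
    return observed - background_click_prob.get(position, 0.05)

print(click_deviation(clicks=120, impressions=400, position=2))  # about 0.12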
5. EXPERIMENTAL SETUP

The ultimate goal of incorporating implicit feedback into ranking is to improve the relevance of the returned web search results. Hence, we compare the ranking methods over a large set of judged queries with explicit relevance labels provided by human judges. In order for the evaluation to be realistic we obtained a random sample of queries from the web search logs of a major search engine, with associated results and traces of user actions. We describe this dataset in detail next. The metrics we use to evaluate the ranking alternatives are described in Section 5.2; the alternatives themselves are listed in Section 5.3 and compared in the experiments of Section 6.

5.1 Datasets

We compared our ranking methods over a random sample of 3,000 queries from the search engine query logs. The queries were drawn from the logs uniformly at random by token without replacement, resulting in a query sample representative of the overall query distribution. On average, 30 results were explicitly labeled by human judges using a six point scale ranging from Perfect down to Bad. Overall, there were over 83,000 results with explicit relevance judgments. In order to compute various statistics, documents with label Good or better will be considered relevant, and those with lower labels non-relevant. Note that the experiments were performed over results already highly ranked by a web search engine, which corresponds to a typical user experience, limited to the small number of highly ranked results for a typical web search query. The user interactions were collected over a period of 8 weeks using voluntary opt-in information. In total, over 1.2 million unique queries were instrumented, resulting in over 12 million individual interactions with the search engine. The data consisted of user interactions with the web search engine (e.g., clicking on a result link, going back to search results, etc.) performed after a query was submitted. These actions were aggregated across users and search sessions and converted to the features in Table 4.1. To create the training, validation, and test query sets, we created three different random splits of 1,500 training, 500 validation, and 1,000 test queries. The splits were done randomly by query, so that there was no overlap in training, validation, and test queries.

5.2 Evaluation Metrics

We evaluate the ranking algorithms over a range of accepted information retrieval metrics, namely Precision at K (P(K)), Normalized Discounted Cumulative Gain (NDCG), and Mean Average Precision (MAP). Each metric focuses on a different aspect of system performance, as we describe below.

• Precision at K: As the most intuitive metric, P(K) reports the fraction of documents ranked in the top K results that are labeled as relevant. In our setting, we require a relevant document to be labeled Good or higher. The position of relevant documents within the top K is irrelevant, and hence this metric measures overall user satisfaction with the top K results.

• NDCG at K: NDCG is a retrieval measure devised specifically for web search evaluation [10]. For a given query q, the ranked results are examined from the top ranked down, and the NDCG is computed as Nq = Mq · Σ over j = 1..K of (2^r(j) − 1)/log(1 + j), where Mq is a normalization constant calculated so that a perfect ordering would obtain an NDCG of 1, and each r(j) is the integer relevance label (0=Bad and 5=Perfect) of the result returned at position j. Note that unlabeled and Bad documents do not contribute to the sum, but will reduce NDCG for the query by pushing down the relevant labeled documents, reducing their contributions. NDCG is well suited to web search evaluation, as it rewards relevant documents in the top ranked results more heavily than those ranked lower.

• MAP: Average precision for each query is defined as the mean of the precision at K values computed after each relevant document was retrieved. The final MAP value is defined as the mean of average precisions of all queries in the test set. This metric is the most commonly used single-value summary of a run over a set of queries.
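For concreteness, here is a minimal sketch of the NDCG@K computation just defined. The logarithm base is not stated above; base 2 is an assumption here, as is obtaining the normalization constant Mq from the ideal ordering of the same labels.

import math

# Sketch of NDCG@K from Section 5.2; labels r(j) are integers 0 (Bad) to 5 (Perfect).

def dcg_at_k(labels, k):
    return sum((2 ** r - 1) / math.log(1 + j, 2)  # base 2 is an assumption
               for j, r in enumerate(labels[:k], start=1))

def ndcg_at_k(labels, k):
    ideal = dcg_at_k(sorted(labels, reverse=True), k)  # the M_q normalization
    return dcg_at_k(labels, k) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([5, 3, 0, 1], k=4))  # close to 1.0: near-ideal ordering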
5.3 Ranking Methods Compared

Recall that our goal is to quantify the effectiveness of implicit behavior for real web search. One dimension is to compare the utility of implicit feedback with other information available to a web search engine. Specifically, we compare the effectiveness of implicit user behaviors with content-based matching, static page quality features, and combinations of all features.

• BM25F: As a strong web search baseline we used the BM25F scoring function, which was used in one of the best performing systems in the TREC 2004 Web track [23,27]. BM25F and its variants have been extensively described and evaluated in the IR literature, and hence serve as a strong, reproducible baseline. The BM25F variant we used for our experiments computes separate match scores for each field of a result document (e.g., body text, title, and anchor text), and incorporates query-independent link-based information (e.g., PageRank, ClickDistance, and URL depth). The scoring function and field-specific tuning are described in detail in [23]. Note that BM25F does not directly consider explicit or implicit feedback for tuning.

• RN: The ranking produced by a neural net ranker (RankNet, described in Section 3.3) that learns to rank web search results by incorporating BM25F and a large number of additional static and dynamic features describing each search result. This system automatically learns weights for all features (including the BM25F score for a document) based on explicit human labels for a large set of queries. A system incorporating an implementation of RankNet is currently in use by a major search engine and can be considered representative of the state of the art in web search.

• BM25F-RerankCT: The ranking produced by incorporating clickthrough statistics to reorder the web search results ranked by BM25F above. Clickthrough is a particularly important special case of implicit feedback, and has been shown to correlate with result relevance. This is a special case of the ranking method in Section 3.1, with the weight wI set to 1000 and the implicit rank Id determined simply by the number of clicks on the result corresponding to d. In effect, this ranking brings to the top all returned web search results with at least one click (and orders them in decreasing order by number of clicks); the relative ranking of the remaining results is unchanged, and they are inserted below all clicked results (a code sketch of this special case follows the list). This method serves as our baseline implicit feedback reranking method.

• BM25F-RerankAll: The ranking produced by reordering the BM25F results using all user behavior features (Section 4). This method learns a model of user preferences by correlating feature values with explicit relevance labels using the RankNet neural net algorithm (Section 4.2). At runtime, for a given query the implicit score Ir is computed for each result r with available user interaction features, and the implicit ranking is produced. The merged ranking is computed as described in Section 3.1. Based on experiments over the development set we fix the value of wI to 3 (the effect of the wI parameter for this ranker turned out to be negligible).

• BM25F+All: The ranking derived by training the RankNet learner (Section 3.3) over the feature set consisting of the BM25F score as well as all implicit feedback features (Section 3.2). We used the 2-layer implementation of RankNet [5] trained on the queries and labels in the training and validation sets.

• RN+All: The ranking derived by training the 2-layer RankNet ranking algorithm (Section 3.3) over the union of all content, dynamic, and implicit feedback features (i.e., all of the features described above as well as all of the new implicit feedback features we introduced).

The ranking methods above span the range of information used for ranking, from using neither implicit nor explicit feedback (i.e., BM25F) to a modern web search engine using hundreds of features and tuned on explicit judgments (RN). As we will show next, incorporating user behavior into these ranking systems dramatically improves the relevance of the returned documents.
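As referenced in the BM25F-RerankCT description above, here is a small sketch of that reranking special case: clicked results move to the top in decreasing order of clicks, and unclicked results keep their original relative order below them. The input structures are illustrative.

```python
def rerank_by_clickthrough(results, clicks):
    """results: URLs in original (e.g., BM25F) order.
    clicks: dict mapping URL -> number of clicks for this query.
    Equivalent to the merged ranking with a very large implicit weight wI."""
    clicked = [u for u in results if clicks.get(u, 0) > 0]
    unclicked = [u for u in results if clicks.get(u, 0) == 0]
    # Clicked results first, ordered by decreasing click count; ties keep
    # their original order because Python's sort is stable.
    clicked.sort(key=lambda u: -clicks[u])
    return clicked + unclicked

print(rerank_by_clickthrough(["a", "b", "c", "d"], {"c": 7, "b": 2}))
# -> ['c', 'b', 'a', 'd']
```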
6. EXPERIMENTAL RESULTS

Implicit feedback for web search ranking can be exploited in a number of ways. We compare alternative methods of exploiting implicit feedback, both by re-ranking the top results (i.e., the BM25F-RerankCT and BM25F-RerankAll methods that reorder BM25F results) and by integrating the implicit features directly into the ranking process (i.e., the RN+All and BM25F+All methods, which learn to rank results over the implicit feedback and other features). We compare our methods against strong baselines (BM25F and RN) on the NDCG, Precision at K, and MAP measures defined in Section 5.2. The results were averaged over three random splits of the overall dataset. Each split contained 1,500 training, 500 validation, and 1,000 test queries, with all query sets disjoint. We first present the results over all 1,000 test queries (i.e., including queries for which there are no implicit measures, so the original web rankings are used). We then drill down to examine the effects of reranking on the attempted queries in more detail, analyzing where implicit feedback proved most beneficial.

We first experimented with different methods of re-ranking the output of the BM25F search results. Figures 6.1 and 6.2 report NDCG and Precision for BM25F, as well as for the strategies reranking results with user feedback (Section 3.1). Incorporating all user feedback (either in the reranking framework or as features to the learner directly) results in significant improvements (using a two-tailed t-test with p=0.01) over both the original BM25F ranking and reranking with clickthrough alone. The improvement is consistent across the top 10 results and largest for the top result: NDCG at 1 for BM25F+All is 0.622 compared to 0.518 for the original results, and precision at 1 similarly increases from 0.5 to 0.63. Based on these results we will use the direct feature combination (i.e., BM25F+All) ranker for subsequent comparisons involving implicit feedback.

Figure 6.1: NDCG at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F+All for varying K

Figure 6.2: Precision at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F+All for varying K

Interestingly, using clickthrough alone, while giving significant benefit over the original BM25F ranking, is not as effective as considering the full set of features in Table 4.1.
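The significance statements in this section compare per-query metric values for two rankers. A minimal version of such a check, here a paired two-tailed t-test over hypothetical per-query NDCG values using SciPy:

```python
from scipy.stats import ttest_rel

# Hypothetical per-query NDCG@10 values for the same test queries under
# the baseline ranker and the feedback-enriched ranker.
baseline = [0.52, 0.40, 0.61, 0.33, 0.75, 0.48]
with_feedback = [0.60, 0.47, 0.63, 0.41, 0.74, 0.55]

# Paired samples: each query is measured under both systems, so a
# dependent (paired) two-tailed t-test is the appropriate form.
t_stat, p_value = ttest_rel(with_feedback, baseline)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")
```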
While we analyze user behavior (and the most effective component features) in a separate paper [1], it is worthwhile to give a concrete example of the kind of noise inherent in real user feedback in the web search setting.

Figure 6.3: Relative clickthrough frequency for queries with varying Position of Top Relevant result (PTR).

If users considered only the relevance of a result to their query, they would click on the topmost relevant results. Unfortunately, as Joachims and others have shown, presentation also influences which results users click on quite dramatically. Users often click on results above the relevant one, presumably because the short summaries do not provide enough information to make accurate relevance assessments and users have learned that, on average, top-ranked items are relevant. Figure 6.3 shows relative clickthrough frequencies for queries with known relevant items at positions other than the first position; the position of the top relevant result (PTR) ranges from 2-10 in the figure. For example, for queries with the first relevant result at position 5 (PTR=5), there are more clicks on the non-relevant results in higher ranked positions than on the first relevant result at position 5. As we will see, learning over a richer behavior feature set results in substantial accuracy improvement over clickthrough alone.
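A Figure 6.3-style analysis is straightforward to compute. The sketch below tallies relative clickthrough frequency by position for queries grouped by PTR; the log format and the peak-normalization are our assumptions, since the paper does not specify how the figure's frequencies were normalized.

```python
from collections import Counter, defaultdict

# Hypothetical click log: (query, clicked_position) pairs, plus a map
# from query to the known position of its top relevant result (PTR).
clicks = [("q1", 1), ("q1", 2), ("q1", 2), ("q2", 1), ("q2", 3)]
ptr = {"q1": 2, "q2": 3}

def relative_click_frequency(clicks, ptr):
    """For each PTR group, normalize per-position click counts so the
    most-clicked position in the group has relative frequency 1.0."""
    by_group = defaultdict(Counter)
    for query, pos in clicks:
        by_group[ptr[query]][pos] += 1
    rel = {}
    for group, counts in by_group.items():
        peak = max(counts.values())
        rel[group] = {pos: c / peak for pos, c in sorted(counts.items())}
    return rel

# A result like {2: {1: 0.5, 2: 1.0}} would mean that for PTR=2 queries,
# position 1 still attracts half as many clicks as the relevant result.
print(relative_click_frequency(clicks, ptr))
```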
We now consider incorporating user behavior into a much richer feature set, RN (Section 5.3), used by a major web search engine. RN incorporates BM25F, link-based features, and hundreds of other features. Figure 6.4 reports NDCG at K and Figure 6.5 reports Precision at K. Interestingly, while the original RN rankings are significantly more accurate than BM25F alone, incorporating implicit feedback features (BM25F+All) results in a ranking that significantly outperforms the original RN rankings. In other words, implicit feedback incorporates sufficient information to replace the hundreds of other features available to the RankNet learner trained on the RN feature set.

Figure 6.4: NDCG at K for BM25F, BM25F+All, RN, and RN+All for varying K

Furthermore, enriching the RN features with the implicit feedback set exhibits significant gains on all measures, allowing RN+All to outperform all other methods. This demonstrates the complementary nature of implicit feedback with the other features available to a state of the art web search engine.

Figure 6.5: Precision at K for BM25F, BM25F+All, RN, and RN+All for varying K

We summarize the performance of the different ranking methods in Table 6.1, reporting the Mean Average Precision (MAP) score for each system. While not intuitive to interpret, MAP allows quantitative comparison on a single metric. The gains marked with * are significant at the p=0.01 level using a two-tailed t-test.

Method                MAP    Gain    P(1)   Gain
BM25F                 0.184  -       0.503  -
BM25F-Rerank-CT       0.215  0.031*  0.577  0.073*
BM25F-RerankImplicit  0.218  0.003   0.605  0.028*
BM25F+Implicit        0.222  0.004   0.620  0.015*
RN                    0.215  -       0.597  -
RN+All                0.248  0.033*  0.629  0.032*

Table 6.1: Mean Average Precision (MAP) for all strategies.

So far we have reported results averaged across all queries in the test set. Unfortunately, less than half had sufficient interactions to attempt reranking. Out of the 1,000 test queries, between 46% and 49%, depending on the train-test split, had sufficient interaction information to make predictions (i.e., there was at least one search session in which at least one result URL was clicked on by the user). This is not surprising: web search is heavy-tailed, and there are many unique queries.

We now consider the performance on the queries for which user interactions were available. Figure 6.6 reports NDCG for the subset of the test queries with implicit feedback features. The gains at top 1 are dramatic: the NDCG at 1 of BM25F+All increases from 0.6 to 0.75 (a 31% relative gain), achieving performance comparable to RN+All operating over a much richer feature set.

Figure 6.6: NDCG at K for BM25F, BM25F+All, RN, and RN+All on test queries with user interactions

Similarly, the gains in precision at top 1 are substantial (Figure 6.7), and are likely to be apparent to web search users. When implicit feedback is available, the BM25F+All system returns a relevant document at top 1 almost 70% of the time, compared to 53% of the time when implicit feedback is not considered by the original BM25F system.

Figure 6.7: Precision at K for BM25F, BM25F+All, RN, and RN+All on test queries with user interactions

We summarize the results on the MAP measure for attempted queries in Table 6.2. MAP improvements are both substantial and significant, with the improvements over the BM25F ranker most pronounced.

Method     MAP    Gain          P(1)   Gain
RN         0.269  -             0.632  -
RN+All     0.321  0.051 (19%)   0.693  0.061 (10%)
BM25F      0.236  -             0.525  -
BM25F+All  0.292  0.056 (24%)   0.687  0.162 (31%)

Table 6.2: Mean Average Precision (MAP) on attempted queries for the best performing methods.

We now analyze the cases where implicit feedback proved most helpful. Figure 6.8 reports the MAP improvements over the baseline BM25F run for each query with MAP under 0.6. Note that most of the improvement is for poorly performing queries (i.e., MAP < 0.1). Interestingly, incorporating user behavior information degrades accuracy for queries with a high original MAP score. One possible explanation is that these easy queries tend to be navigational (i.e., having a single, highly-ranked most appropriate answer), and user interactions with lower-ranked results may indicate divergent information needs that are better served by the less popular results (with correspondingly poor overall relevance ratings).

Figure 6.8: Gain of BM25F+All over the original BM25F ranking
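A Figure 6.8-style breakdown can be computed by bucketing queries on their baseline per-query average precision and averaging the gain within each bucket. The per-query values below are hypothetical; only the bucketing logic is intended to be illustrative.

```python
from collections import defaultdict

# Hypothetical per-query average precision under the baseline (BM25F)
# and the feedback-enriched (BM25F+All) rankers.
baseline_ap = {"q1": 0.05, "q2": 0.08, "q3": 0.35, "q4": 0.55}
feedback_ap = {"q1": 0.22, "q2": 0.19, "q3": 0.37, "q4": 0.50}

def gain_by_baseline_bucket(baseline_ap, feedback_ap, width=0.1):
    """Average AP gain over the baseline, grouped by baseline AP bucket."""
    buckets = defaultdict(list)
    for q, base in baseline_ap.items():
        bucket = round(base // width * width, 2)  # e.g., 0.35 -> 0.3 bucket
        buckets[bucket].append(feedback_ap[q] - base)
    return {b: sum(g) / len(g) for b, g in sorted(buckets.items())}

# For the paper's data, gains concentrate in the low-MAP buckets.
print(gain_by_baseline_bucket(baseline_ap, feedback_ap))
# -> {0.0: 0.14, 0.3: 0.02, 0.5: -0.05}
```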
To summarize our experimental results, incorporating implicit feedback in a real web search setting resulted in significant improvements over the original rankings, using both the BM25F and RN baselines. Our rich set of implicit features, such as time on page and deviations from average behavior, provides advantages over using clickthrough alone as an indicator of interest. Furthermore, incorporating implicit feedback features directly into the learned ranking function is more effective than using implicit feedback for reranking. The improvements observed over large test sets of queries (1,000 total, between 466 and 495 with implicit feedback available) are both substantial and statistically significant.

7. CONCLUSIONS AND FUTURE WORK

In this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking. We performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating noisy implicit feedback to improve web search relevance. We compared two alternatives for incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function. Our experiments showed significant improvement over methods that do not consider implicit feedback. The gains are particularly dramatic for the top K=1 result in the final ranking, with precision improvements as high as 31%, and the gains are substantial for all values of K. Our experiments showed that implicit user feedback can further improve web search performance when incorporated directly with popular content- and link-based features. Interestingly, implicit feedback is particularly valuable for queries with a poor original ranking of results (e.g., MAP lower than 0.1). One promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the difficult queries. As another research direction, we are exploring methods for extending our predictions to previously unseen queries (e.g., via query clustering), which should further improve the web search experience of users.

ACKNOWLEDGMENTS

We thank Chris Burges and Matt Richardson for an implementation of RankNet for our experiments. We also thank Robert Ragno for his valuable suggestions and many discussions.

8. REFERENCES

[1] E. Agichtein, E. Brill, S. Dumais, and R. Ragno. Learning User Interaction Models for Predicting Web Search Result Preferences. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
[2] J. Allan. HARD Track Overview in TREC 2003: High Accuracy Retrieval from Documents. 2003.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
[4] S. Brin and L. Page. The Anatomy of a Large-scale Hypertextual Web Search Engine. In Proceedings of WWW, 1997.
[5] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to Rank using Gradient Descent. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
[6] D.M. Chickering. The WinMine Toolkit. Microsoft Technical Report MSR-TR-2002-103, 2002.
[7] M. Claypool, D. Brown, P. Lee, and M. Waseda. Inferring User Interest. IEEE Internet Computing, 2001.
[8] S. Fox, K. Karnawat, M. Mydland, S.T. Dumais, and T. White. Evaluating Implicit Measures to Improve the Search Experience. ACM Transactions on Information Systems, 2005.
[9] J. Goecks and J. Shavlik. Learning Users' Interests by Unobtrusively Observing Their Normal Behavior. In Proceedings of the IJCAI Workshop on Machine Learning for Information Filtering, 1999.
[10] K. Jarvelin and J. Kekalainen. IR Evaluation Methods for Retrieving Highly Relevant Documents. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2000.
[11] T. Joachims. Optimizing Search Engines Using Clickthrough Data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (SIGKDD), 2002.
[12] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay. Accurately Interpreting Clickthrough Data as Implicit Feedback. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2005.
[13] T. Joachims. Making Large-Scale SVM Learning Practical. In Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.
[14] D. Kelly and J. Teevan. Implicit Feedback for Inferring User Preference: A Bibliography. SIGIR Forum, 2003.
[15] J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon, and J. Riedl. GroupLens: Applying Collaborative Filtering to Usenet News. Communications of the ACM, 1997.
[16] M. Morita and Y. Shinoda. Information Filtering Based on User Behavior Analysis and Best Match Text Retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 1994.
[17] D. Oard and J. Kim. Implicit Feedback for Recommender Systems. In Proceedings of the AAAI Workshop on Recommender Systems, 1998.
[18] D. Oard and J. Kim. Modeling Information Content Using Observable Behavior. In Proceedings of the 64th Annual Meeting of the American Society for Information Science and Technology, 2001.
[19] N. Pharo and K. Järvelin. The SST Method: A Tool for Analyzing Web Information Search Processes. Information Processing & Management, 2004.
[20] P. Pirolli. The Use of Proximal Information Scent to Forage for Distal Content on the World Wide Web. In Working with Technology in Mind: Brunswikian Resources for Cognitive Science and Engineering, Oxford University Press, 2004.
[21] F. Radlinski and T. Joachims. Query Chains: Learning to Rank from Implicit Feedback. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (SIGKDD), 2005.
[22] F. Radlinski and T. Joachims. Evaluating the Robustness of Learning from Implicit Feedback. In Proceedings of the ICML Workshop on Learning in Web Search, 2005.
[23] S.E. Robertson, H. Zaragoza, and M. Taylor. Simple BM25 Extension to Multiple Weighted Fields. In Proceedings of the Conference on Information and Knowledge Management (CIKM), 2004.
[24] G. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[25] E.M. Voorhees and D. Harman. Overview of TREC 2001. 2001.
[26] G.R. Xue, H.J. Zeng, Z. Chen, Y. Yu, W.Y. Ma, W.S. Xi, and W.G. Fan. Optimizing Web Search Using Web Clickthrough Data. In Proceedings of the Conference on Information and Knowledge Management (CIKM), 2004.
[27] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S. Robertson. Microsoft Cambridge at TREC 13: Web and HARD Tracks. In Proceedings of TREC, 2004.
Improving Web Search Ranking by Incorporating User Behavior Information ABSTRACT We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31% relative to the original performance. 1. INTRODUCTION Millions of users interact with search engines daily. They issue queries, follow some of the links in the results, click on ads, spend time on pages, reformulate their queries, and perform other actions. These interactions can serve as a valuable source of information for tuning and improving web search result ranking and can compliment more costly explicit judgments. Implicit relevance feedback for ranking and personalization has become an active area of research. Recent work by Joachims and others exploring implicit feedback in controlled environments have shown the value of incorporating implicit feedback into the ranking process. Our motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval. How does it compare to and compliment evidence from page content, anchor text, or link-based features such as inlinks or PageRank? While it is intuitive that user interactions with the web search engine should reveal at least some information that could be used for ranking, estimating user preferences in real web search settings is a challenging problem, since real user interactions tend to be more "noisy" than commonly assumed in the controlled settings of previous studies. Our paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned. To this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine. The specific contributions of this paper include: • Analysis of alternatives for incorporating user behavior into web search ranking (Section 3). • An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4). • A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6). We summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper. 2. BACKGROUND AND RELATED WORK Ranking search results is a fundamental problem in information retrieval. Most common approaches primarily focus on similarity of query and a page, as well as the overall page quality [3,4,24]. However, with increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings. Implicit relevance measures have been studied by several research groups. An overview of implicit measures is compiled in Kelly and Teevan [14]. 
This research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings. Closely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. Fox et al. [8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to correlate implicit measures and explicit relevance judgments for both individual queries and search sessions. This work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior. However, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions. Other studies of user behavior in web search include Pharo and Järvelin [19], but were not directly applied to improve ranking. More recently, Joachims et al. [12] presented an empirical evaluation of interpreting clickthrough evidence. By performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthroughs in a controlled, laboratory setting. Unfortunately, the extent to which previous research applies to real-world web search is unclear. At the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines. We build on existing research to develop robust user behavior interpretation techniques for the real web search setting. Instead of treating each user as a reliable "expert", we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections. 3. INCORPORATING IMPLICIT FEEDBACK We consider two complementary approaches to ranking with implicit feedback: (1) treating implicit feedback as independent evidence for ranking results, and (2) integrating implicit feedback features directly into the ranking algorithm. We describe the two general ranking approaches next. The specific implicit feedback features are described in Section 4, and the algorithms for interpreting and incorporating implicit feedback are described in Section 5. 3.1 Implicit Feedback as Independent Evidence The general approach is to re-rank the results obtained by a web search engine according to observed clickthrough and other user interactions for the query in previous search sessions. Each result is assigned a score according to expected relevance/user satisfaction based on previous interactions, resulting in some preference ordering based on user interactions alone. While there has been significant work on merging multiple rankings, we adapt a simple and robust approach of ignoring the original rankers' scores, and instead simply merge the rank orders. The main reason for ignoring the original scores is that since the feature spaces and learning algorithms are different, the scores are not directly comparable, and re-normalization tends to remove the benefit of incorporating classifier scores. We experimented with a variety of merging functions on the development set of queries (and using a set of interactions from a different time period from final evaluation sets). 
We found that a simple rank merging heuristic combination works well, and is robust to variations in score values from original rankers. For a given query q, the implicit score ISd is computed for each result d from available user interaction features, resulting in the implicit rank Id for each result. We compute a merged score SM (d) for d by combining the ranks obtained from implicit feedback, Id with the original rank of d, Od: where the weight wI is a heuristically tuned scaling factor representing the relative "importance" of the implicit feedback. The query results are ordered in by decreasing values of SM to produce the final ranking. One special case of this model arises when setting wI to a very large value, effectively forcing clicked results to be ranked higher than un-clicked results--an intuitive and effective heuristic that we will use as a baseline. Applying more sophisticated classifier and ranker combination algorithms may result in additional improvements, and is a promising direction for future work. The approach above assumes that there are no interactions between the underlying features producing the original web search ranking and the implicit feedback features. We now relax this assumption by integrating implicit feedback features directly into the ranking process. 3.2 Ranking with Implicit Feedback Features Modern web search engines rank results based on a large number of features, including content-based features (i.e., how closely a query matches the text or title or anchor text of the document), and queryindependent page quality features (e.g., PageRank of the document or the domain). In most cases, automatic (or semi-automatic) methods are developed for tuning the specific ranking function that combines these feature values. Hence, a natural approach is to incorporate implicit feedback features directly as features for the ranking algorithm. During training or tuning, the ranker can be tuned as before but with additional features. At runtime, the search engine would fetch the implicit feedback features associated with each query-result URL pair. This model requires a ranking algorithm to be robust to missing values: more than 50% of queries to web search engines are unique, with no previous implicit feedback available. We now describe such a ranker that we used to learn over the combined feature sets including implicit feedback. 3.3 Learning to Rank Web Search Results A key aspect of our approach is exploiting recent advances in machine learning, namely trainable ranking algorithms for web search and information retrieval (e.g., [5, 11] and classical results reviewed in [3]). In our setting, explicit human relevance judgments (labels) are available for a set of web search queries and results. Hence, an attractive choice to use is a supervised machine learning technique to learn a ranking function that best predicts relevance judgments. RankNet is one such algorithm. It is a neural net tuning algorithm that optimizes feature weights to best match explicitly provided pairwise user preferences. While the specific training algorithms used by RankNet are beyond the scope of this paper, it is described in detail in [5] and includes extensive evaluation and comparison with other ranking methods. An attractive feature of RankNet is both train - and run-time efficiency--runtime ranking can be quickly computed and can scale to the web, and training can be done over thousands of queries and associated judged results. 
We use a 2-layer implementation of RankNet in order to model non-linear relationships between features. Furthermore, RankNet can learn with many (differentiable) cost functions, and hence can automatically learn a ranking function from human-provided labels, an attractive alternative to heuristic feature combination techniques. Hence, we will also use RankNet as a generic ranker to explore the contribution of implicit feedback for different ranking alternatives. 4. IMPLICIT USER FEEDBACK MODEL Our goal is to accurately interpret noisy user feedback obtained as by tracing user interactions with the search engine. Interpreting implicit feedback in real web search setting is not an easy task. We characterize this problem in detail in [1], where we motivate and evaluate a wide variety of models of implicit user activities. The general approach is to represent user actions for each search result as a vector of features, and then train a ranker on these features to discover feature values indicative of relevant (and non-relevant) search results. We first briefly summarize our features and model, and the learning approach (Section 4.2) in order to provide sufficient information to replicate our ranking methods and the subsequent experiments. 4.1 Representing User Actions as Features We model observed web search behaviors as a combination of a ̏background" component (i.e., query - and relevance-independent noise in user behavior, including positional biases with result interactions), and a ̏relevance" component (i.e., query-specific behavior indicative of relevance of a result to a query). We design our features to take advantage of aggregated user behavior. The feature set is comprised of directly observed features (computed directly from observations for each query), as well as query-specific derived features, computed as the deviation from the overall queryindependent distribution of values for the corresponding directly observed feature values. The features used to represent user interactions with web search results are summarized in Table 4.1. This information was obtained via opt-in client-side instrumentation from users of a major web search engine. We include the traditional implicit feedback features such as clickthrough counts for the results, as well as our novel derived features such as the deviation of the observed clickthrough number for a given query-URL pair from the expected number of clicks on a result in the given position. We also model the browsing behavior after a result was clicked--e.g., the average page dwell time for a given query-URL pair, as well as its deviation from the expected (average) dwell time. Furthermore, the feature set was designed to provide essential information about the user experience to make feedback interpretation robust. For example, web search users can often determine whether a result is relevant by looking at the result title, URL, and summary--in many cases, looking at the original document is not necessary. To model this aspect of user experience we include features such as overlap in words in title and words in query (TitleOverlap) and the fraction of words shared by the query and the result summary. Table 4.1: Some features used to represent post-search navigation history for a given query and search result URL. Having described our feature set, we briefly review our general method for deriving a user behavior model. 
4.2 Deriving a User Feedback Model To learn to interpret the observed user behavior, we correlate user actions (i.e., the features in Table 4.1 representing the actions) with the explicit user judgments for a set of training queries. We find all the instances in our session logs where these queries were submitted to the search engine, and aggregate the user behavior features for all search sessions involving these queries. Each observed query-URL pair is represented by the features in Table 4.1, with values averaged over all search sessions, and assigned one of six possible relevance labels, ranging from "Perfect" to "Bad", as assigned by explicit relevance judgments. These labeled feature vectors are used as input to the RankNet training algorithm (Section 3.3) which produces a trained user behavior model. This approach is particularly attractive as it does not require heuristics beyond feature engineering. The resulting user behavior model is used to help rank web search results--either directly or in combination with other features, as described below. 5. EXPERIMENTAL SETUP The ultimate goal of incorporating implicit feedback into ranking is to improve the relevance of the returned web search results. Hence, we compare the ranking methods over a large set of judged queries with explicit relevance labels provided by human judges. In order for the evaluation to be realistic we obtained a random sample of queries from web search logs of a major search engine, with associated results and traces for user actions. We describe this dataset in detail next. Our metrics are described in Section 5.2 that we use to evaluate the ranking alternatives, listed in Section 5.3 in the experiments of Section 6. 5.1 Datasets We compared our ranking methods over a random sample of 3,000 queries from the search engine query logs. The queries were drawn from the logs uniformly at random by token without replacement, resulting in a query sample representative of the overall query distribution. On average, 30 results were explicitly labeled by human judges using a six point scale ranging from "Perfect" down to "Bad". Overall, there were over 83,000 results with explicit relevance judgments. In order to compute various statistics, documents with label "Good" or better will be considered "relevant", and with lower labels to be "non-relevant". Note that the experiments were performed over the results already highly ranked by a web search engine, which corresponds to a typical user experience which is limited to the small number of the highly ranked results for a typical web search query. The user interactions were collected over a period of 8 weeks using voluntary opt-in information. In total, over 1.2 million unique queries were instrumented, resulting in over 12 million individual interactions with the search engine. The data consisted of user interactions with the web search engine (e.g., clicking on a result link, going back to search results, etc.) performed after a query was submitted. These actions were aggregated across users and search sessions and converted to features in Table 4.1. To create the training, validation, and test query sets, we created three different random splits of 1,500 training, 500 validation, and 1000 test queries. The splits were done randomly by query, so that there was no overlap in training, validation, and test queries. 
5.2 Evaluation Metrics We evaluate the ranking algorithms over a range of accepted information retrieval metrics, namely Precision at K (P (K)), Normalized Discounted Cumulative Gain (NDCG), and Mean Average Precision (MAP). Each metric focuses on a deferent aspect of system performance, as we describe below. • Precision at K: As the most intuitive metric, P (K) reports the fraction of documents ranked in the top K results that are labeled as relevant. In our setting, we require a relevant document to be labeled "Good" or higher. The position of relevant documents within the top K is irrelevant, and hence this metric measure overall user satisfaction with the top K results. • NDCG at K: NDCG is a retrieval measure devised specifically for web search evaluation [10]. For a given query q, the ranked results are examined from the top ranked down, and the NDCG computed as: Where Mq is a normalization constant calculated so that a perfect ordering would obtain NDCG of 1; and each r (j) is an integer relevance label (0 =" Bad" and 5 =" Perfect") of result returned at position j. Note that unlabeled and "Bad" documents do not contribute to the sum, but will reduce NDCG for the query pushing down the relevant labeled documents, reducing their contributions. NDCG is well suited to web search evaluation, as it rewards relevant documents in the top ranked results more heavily than those ranked lower. • MAP: Average precision for each query is defined as the mean of the precision at K values computed after each relevant document was retrieved. The final MAP value is defined as the mean of average precisions of all queries in the test set. This metric is the most commonly used single-value summary of a run over a set of queries. 5.3 Ranking Methods Compared Recall that our goal is to quantify the effectiveness of implicit behavior for real web search. One dimension is to compare the utility of implicit feedback with other information available to a web search engine. Specifically, we compare effectiveness of implicit user behaviors with content-based matching, static page quality features, and combinations of all features. • BM25F: As a strong web search baseline we used the BM25F scoring, which was used in one of the best performing systems in the TREC 2004 Web track [23,27]. BM25F and its variants have been extensively described and evaluated in IR literature, and hence serve as a strong, reproducible baseline. The BM25F variant we used for our experiments computes separate match scores for each "field" for a result document (e.g., body text, title, and anchor text), and incorporates query-independent linkbased information (e.g., PageRank, ClickDistance, and URL depth). The scoring function and field-specific tuning is described in detail in [23]. Note that BM25F does not directly consider explicit or implicit feedback for tuning. • RN: The ranking produced by a neural net ranker (RankNet, described in Section 3.3) that learns to rank web search results by incorporating BM25F and a large number of additional static and dynamic features describing each search result. This system automatically learns weights for all features (including the BM25F score for a document) based on explicit human labels for a large set of queries. A system incorporating an implementation of RankNet is currently in use by a major search engine and can be considered representative of the state of the art in web search. 
• BM25F-RerankCT: The ranking produced by incorporating clickthrough statistics to reorder web search results ranked by BM25F above. Clickthrough is a particularly important special case of implicit feedback, and has been shown to correlate with result relevance. This is a special case of the ranking method in Section 3.1, with the weight wI set to 1000 and the ranking Id is simply the number of clicks on the result corresponding to d. In effect, this ranking brings to the top all returned web search results with at least one click (and orders them in decreasing order by number of clicks). The relative ranking of the remainder of results is unchanged and they are inserted below all clicked results. This method serves as our baseline implicit feedback reranking method. BM25F-RerankAll The ranking produced by reordering the BM25F results using all user behavior features (Section 4). This method learns a model of user preferences by correlating feature values with explicit relevance labels using the RankNet neural net algorithm (Section 4.2). At runtime, for a given query the implicit score Ir is computed for each result r with available user interaction features, and the implicit ranking is produced. The merged ranking is computed as described in Section 3.1. Based on the experiments over the development set we fix the value of wI to 3 (the effect of the wI parameter for this ranker turned out to be negligible). • BM25F + All: Ranking derived by training the RankNet (Section 3.3) learner over the features set of the BM25F score as well as all implicit feedback features (Section 3.2). We used the 2-layer implementation of RankNet [5] trained on the queries and labels in the training and validation sets. • RN+A ll: Ranking derived by training the 2-layer RankNet ranking algorithm (Section 3.3) over the union of all content, dynamic, and implicit feedback features (i.e., all of the features described above as well as all of the new implicit feedback features we introduced). The ranking methods above span the range of the information used for ranking, from not using the implicit or explicit feedback at all (i.e., BM25F) to a modern web search engine using hundreds of features and tuned on explicit judgments (RN). As we will show next, incorporating user behavior into these ranking systems dramatically improves the relevance of the returned documents. 6. EXPERIMENTAL RESULTS Implicit feedback for web search ranking can be exploited in a number of ways. We compare alternative methods of exploiting implicit feedback, both by re-ranking the top results (i.e., the BM25F-RerankCT and BM25F-RerankAll methods that reorder BM25F results), as well as by integrating the implicit features directly into the ranking process (i.e., the RN+ALL and BM25F + All methods which learn to rank results over the implicit feedback and other features). We compare our methods over strong baselines (BM25F and RN) over the NDCG, Precision at K, and MAP measures defined in Section 5.2. The results were averaged over three random splits of the overall dataset. Each split contained 1500 training, 500 validation, and 1000 test queries, all query sets disjoint. We first present the results over all 1000 test queries (i.e., including queries for which there are no implicit measures so we use the original web rankings). We then drill down to examine the effects on reranking for the attempted queries in more detail, analyzing where implicit feedback proved most beneficial. 
We first experimented with different methods of re-ranking the output of the BM25F search results. Figures 6.1 and 6.2 report NDCG and Precision for BM25F, as well as for the strategies reranking results with user feedback (Section 3.1). Incorporating all user feedback (either in reranking framework or as features to the learner directly) results in significant improvements (using twotailed t-test with p = 0.01) over both the original BM25F ranking as well as over reranking with clickthrough alone [Rev1]. The improvement is consistent across the top 10 results and largest for the top result: NDCG at 1 for BM25F + All is 0.622 compared to 0.518 of the original results, and precision at 1 similarly increases from 0.5 to 0.63. Based on these results we will use the direct feature combination (i.e., BM25F + All) ranker for subsequent comparisons involving implicit feedback. Figure 6.1: NDCG at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F + All for varying K Figure 6.2: Precision at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F + All for varying K Interestingly, using clickthrough alone, while giving significant benefit over the original BM25F ranking, is not as effective as considering the full set of features in Table 4.1. While we analyze user behavior (and most effective component features) in a separate paper [1], it is worthwhile to give a concrete example of the kind of noise inherent in real user feedback in web search setting. Figure 6.3: Relative clickthrough frequency for queries with varying Position of Top Relevant result (PTR). If users considered only the relevance of a result to their query, they would click on the topmost relevant results. Unfortunately, as Joachims and others have shown, presentation also influences which results users click on quite dramatically. Users often click on results above the relevant one presumably because the short summaries do not provide enough information to make accurate relevance assessments and they have learned that on average topranked items are relevant. Figure 6.3 shows relative clickthrough frequencies for queries with known relevant items at positions other than the first position; the position of the top relevant result (PTR) ranges from 2-10 in the figure. For example, for queries with first relevant result at position 5 (PTR = 5), there are more clicks on the non-relevant results in higher ranked positions than on the first relevant result at position 5. As we will see, learning over a richer behavior feature set, results in substantial accuracy improvement over clickthrough alone [Rev2]. We now consider incorporating user behavior into a much richer feature set, RN (Section 5.3) used by a major web search engine. RN incorporates BM25F, link-based features, and hundreds of other features. Figure 6.4 reports NDCG at K and Figure 6.5 reports Precision at K. Interestingly, while the original RN rankings are significantly more accurate than BM25F alone, incorporating implicit feedback features (BM25F + All) results in ranking that significantly outperforms the original RN rankings. In other words, implicit feedback incorporates sufficient information to replace the hundreds of other features available to the RankNet learner trained on the RN feature set. Figure 6.4: NDCG at% for BM25F, BM25F + All, RN, and RN+A ll for varying% Furthermore, enriching the RN features with implicit feedback set exhibits significant gain on all measures, allowing RN+A ll to outperform all other methods. 
This demonstrates the complementary nature of implicit feedback with other features available to a state of the art web search engine. Figure 6.5: Precision at% for BM25F, BM25F + All, RN, and RN+A ll for varying% We summarize the performance of the different ranking methods in Table 6.1. We report the Mean Average Precision (MAP) score for each system. While not intuitive to interpret, MAP allows quantitative comparison on a single metric. The gains marked with * are significant at p = 0.01 level using two tailed t-test. Table 6.1: Mean Average Precision (MAP) for all strategies. So far we reported results averaged across all queries in the test set. Unfortunately, less than half had sufficient interactions to attempt reranking. Out of the 1000 queries in test, between 46% and 49%, depending on the train-test split, had sufficient interaction information to make predictions (i.e., there was at least 1 search session in which at least 1 result URL was clicked on by the user). This is not surprising: web search is heavy-tailed, and there are many unique queries. We now consider the performance on the queries for which user interactions were available. Figure 6.6 reports NDCG for the subset of the test queries with the implicit feedback features. The gains at top 1 are dramatic. The NDCG at 1 of BM25F + All increases from 0.6 to 0.75 (a 31% relative gain), achieving performance comparable to RN+A ll operating over a much richer feature set. Figure 6.6: NDCG at K for BM25F, BM25F + All, RN, and RN+A ll on test queries with user interactions Similarly, gains on precision at top 1 are substantial (Figure 6.7), and are likely to be apparent to web search users. When implicit feedback is available, the BM25F + All system returns relevant document at top 1 almost 70% of the time, compared 53% of the time when implicit feedback is not considered by the original BM25F system. Figure 6.7: Precision at K NDCG at K for BM25F, BM25F + All, RN, and RN+A ll on test queries with user interactions We summarize the results on the MAP measure for attempted queries in Table 6.2. MAP improvements are both substantial and significant, with improvements over the BM25F ranker most pronounced. Table 6.2: Mean Average Precision (MAP) on attempted queries for best performing methods We now analyze the cases where implicit feedback was shown most helpful. Figure 6.8 reports the MAP improvements over the "baseline" BM25F run for each query with MAP under 0.6. Note that most of the improvement is for poorly performing queries (i.e., MAP <0.1). Interestingly, incorporating user behavior information degrades accuracy for queries with high original MAP score. One possible explanation is that these "easy" queries tend to be navigational (i.e., having a single, highly-ranked most appropriate answer), and user interactions with lower-ranked results may indicate divergent information needs that are better served by the less popular results (with correspondingly poor overall relevance ratings). Figure 6.8: Gain of BM25F + All over original BM25F ranking To summarize our experimental results, incorporating implicit feedback in real web search setting resulted in significant improvements over the original rankings, using both BM25F and RN baselines. Our rich set of implicit features, such as time on page and deviations from the average behavior, provides advantages over using clickthrough alone as an indicator of interest. 
Furthermore, incorporating implicit feedback features directly into the learned ranking function is more effective than using implicit feedback for reranking. The improvements observed over large test sets of queries (1,000 total, between 466 and 495 with implicit feedback available) are both substantial and statistically significant. 7. CONCLUSIONS AND FUTURE WORK In this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking. We performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating "noisy" implicit feedback to improve web search relevance. We compared two alternatives of incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function. Our experiments showed significant improvement over methods that do not consider implicit feedback. The gains are particularly dramatic for the top K = 1 result in the final ranking, with precision improvements as high as 31%, and the gains are substantial for all values of K. Our experiments showed that implicit user feedback can further improve web search performance, when incorporated directly with popular content - and link-based features. Interestingly, implicit feedback is particularly valuable for queries with poor original ranking of results (e.g., MAP lower than 0.1). One promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the "difficult" queries. As another research direction we are exploring methods for extending our predictions to the previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.
Improving Web Search Ranking by Incorporating User Behavior Information ABSTRACT We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31% relative to the original performance. 1. INTRODUCTION Millions of users interact with search engines daily. They issue queries, follow some of the links in the results, click on ads, spend time on pages, reformulate their queries, and perform other actions. These interactions can serve as a valuable source of information for tuning and improving web search result ranking and can compliment more costly explicit judgments. Implicit relevance feedback for ranking and personalization has become an active area of research. Recent work by Joachims and others exploring implicit feedback in controlled environments have shown the value of incorporating implicit feedback into the ranking process. Our motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval. How does it compare to and compliment evidence from page content, anchor text, or link-based features such as inlinks or PageRank? While it is intuitive that user interactions with the web search engine should reveal at least some information that could be used for ranking, estimating user preferences in real web search settings is a challenging problem, since real user interactions tend to be more "noisy" than commonly assumed in the controlled settings of previous studies. Our paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned. To this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine. The specific contributions of this paper include: • Analysis of alternatives for incorporating user behavior into web search ranking (Section 3). • An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4). • A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6). We summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper. 2. BACKGROUND AND RELATED WORK Ranking search results is a fundamental problem in information retrieval. Most common approaches primarily focus on similarity of query and a page, as well as the overall page quality [3,4,24]. However, with increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings. Implicit relevance measures have been studied by several research groups. An overview of implicit measures is compiled in Kelly and Teevan [14]. 
This research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings. Closely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. Fox et al. [8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to correlate implicit measures and explicit relevance judgments for both individual queries and search sessions. This work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior. However, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions. Other studies of user behavior in web search include Pharo and Järvelin [19], but were not directly applied to improve ranking. More recently, Joachims et al. [12] presented an empirical evaluation of interpreting clickthrough evidence. By performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthroughs in a controlled, laboratory setting. Unfortunately, the extent to which previous research applies to real-world web search is unclear. At the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines. We build on existing research to develop robust user behavior interpretation techniques for the real web search setting. Instead of treating each user as a reliable "expert", we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections. 3. INCORPORATING IMPLICIT FEEDBACK 3.1 Implicit Feedback as Independent Evidence 3.2 Ranking with Implicit Feedback Features 3.3 Learning to Rank Web Search Results 4. IMPLICIT USER FEEDBACK MODEL 4.1 Representing User Actions as Features 4.2 Deriving a User Feedback Model 5. EXPERIMENTAL SETUP 5.1 Datasets 5.2 Evaluation Metrics 5.3 Ranking Methods Compared 6. EXPERIMENTAL RESULTS 7. CONCLUSIONS AND FUTURE WORK In this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking. We performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating "noisy" implicit feedback to improve web search relevance. We compared two alternatives of incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function. Our experiments showed significant improvement over methods that do not consider implicit feedback. The gains are particularly dramatic for the top K = 1 result in the final ranking, with precision improvements as high as 31%, and the gains are substantial for all values of K. Our experiments showed that implicit user feedback can further improve web search performance, when incorporated directly with popular content - and link-based features. 
Interestingly, implicit feedback is particularly valuable for queries with poor original ranking of results (e.g., MAP lower than 0.1). One promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the "difficult" queries. As another research direction we are exploring methods for extending our predictions to the previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.
Improving Web Search Ranking by Incorporating User Behavior Information ABSTRACT We show that incorporating user behavior data can significantly improve the ordering of top results in a real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance. 1. INTRODUCTION Millions of users interact with search engines daily. These interactions can serve as a valuable source of information for tuning and improving web search result ranking and can complement more costly explicit judgments. Implicit relevance feedback for ranking and personalization has become an active area of research. Recent work by Joachims and others exploring implicit feedback in controlled environments has shown the value of incorporating implicit feedback into the ranking process. Our motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval. Our paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned. To this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine. The specific contributions of this paper include: • Analysis of alternatives for incorporating user behavior into web search ranking (Section 3). • An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4). • A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6). We summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper. 2. BACKGROUND AND RELATED WORK Ranking search results is a fundamental problem in information retrieval. However, with the increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings. Implicit relevance measures have been studied by several research groups. An overview of implicit measures is compiled in Kelly and Teevan [14]. This research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings. Closely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. Fox et al. [8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to correlate implicit measures and explicit relevance judgments for both individual queries and search sessions. This work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior.
However, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions. Other studies of user behavior in web search, such as Pharo and Järvelin [19], were not directly applied to improve ranking. Unfortunately, the extent to which previous research applies to real-world web search is unclear. At the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines. We build on existing research to develop robust user behavior interpretation techniques for the real web search setting. Instead of treating each user as a reliable "expert", we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections. 7. CONCLUSIONS AND FUTURE WORK In this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking. We performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating "noisy" implicit feedback to improve web search relevance. We compared two alternatives of incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function. Our experiments showed significant improvement over methods that do not consider implicit feedback. Our experiments showed that implicit user feedback can further improve web search performance, when incorporated directly with popular content- and link-based features. Interestingly, implicit feedback is particularly valuable for queries with poor original ranking of results (e.g., MAP lower than 0.1). One promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the "difficult" queries. As another research direction we are exploring methods for extending our predictions to the previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.
H-79
Beyond PageRank: Machine Learning for Static Ranking
Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random).
[ "pagerank", "machin learn", "static rank", "static rank", "ranknet", "inform retriev", "featur-base rank", "adversari classif", "regress", "relev", "visit popular", "search engin" ]
[ "P", "P", "P", "P", "P", "U", "M", "U", "U", "U", "M", "U" ]
Beyond PageRank: Machine Learning for Static Ranking Matthew Richardson Microsoft Research One Microsoft Way Redmond, WA 98052 +1 (425) 722-3325 mattri@microsoft.com Amit Prakash MSN One Microsoft Way Redmond, WA 98052 +1 (425) 705-6015 amitp@microsoft.com Eric Brill Microsoft Research One Microsoft Way Redmond, WA 98052 +1 (425) 705-4992 brill@microsoft.com ABSTRACT Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random). Categories and Subject Descriptors I.2.6 [Artificial Intelligence]: Learning. H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval. General Terms Algorithms, Measurement, Performance, Experimentation. 1. INTRODUCTION Over the past decade, the Web has grown exponentially in size. Unfortunately, this growth has not been isolated to good-quality pages. The number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly. The sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information. Users rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first. To date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (query-dependent ranking, or dynamic ranking). However, having a good query-independent ranking (static ranking) is also crucially important for a search engine. A good static ranking algorithm provides numerous benefits: • Relevance: The static rank of a page provides a general indicator to the overall quality of the page. This is a useful input to the dynamic ranking algorithm. • Efficiency: Typically, the search engine's index is ordered by static rank. By traversing the index from high-quality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high of a dynamic rank as those already found. The more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to queries. • Crawl Priority: The Web grows and changes as quickly as search engines can crawl it. Search engines need a way to prioritize their crawl: to determine which pages to recrawl, how frequently, and how often to seek out new pages. Among other factors, the static rank of a page is used to determine this prioritization. A better static rank thus provides the engine with a higher quality, more up-to-date index. Google is often regarded as the first commercially successful search engine. Their ranking was originally based on the PageRank algorithm [5][27]. Due to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages.
Though PageRank has historically been thought to perform quite well, there has so far been little academic evidence to support this claim. Even worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks. Upstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32]. They found similar results for the task of finding high quality companies [31]. PageRank has also been used in systems for TREC's "very large collection" and "Web track" competitions, but with much less success than had been expected [17]. Finally, Amento et al. [1] found that simple features, such as the number of pages on a site, performed as well as PageRank. Despite these, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank. Failing this, it is still assumed that using the link structure is crucial, in the form of the number of inlinks or the amount of anchor text. In this paper, we show there are a number of simple URL- or page-based features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the structure of the Web. We combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels). A machine learning approach for static ranking has other advantages besides the quality of the ranking. Because the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming). This is particularly true if the feature set is not known. In contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page. With an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank. This flexibility allows a ranking system to rapidly react to new spamming techniques. A machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field. For example, recent work on adversarial classification [12] suggests that it may be possible to explicitly model the Web page spammer's (the adversary) actions, adjusting the ranking model in advance of the spammer's attempts to circumvent it. Another example is the elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank. By moving static ranking to a machine learning framework, we not only gain in accuracy, but also gain in the ability to react to spammers' actions, to rapidly add new features to the ranking algorithm, and to leverage advances in the rapidly growing field of machine learning. Finally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet. These are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful.
For example, the author of an intranet page and his/her position in the organization (e.g., CEO, manager, or developer) could provide significant clues as to the importance of that page. A machine learning approach thus allows rapid development of a good static algorithm in new domains. This paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages. Previous studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). Also, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering. In contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages. We first briefly describe the PageRank algorithm. In Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking. Section 4 describes the static features. The heart of the paper is in Section 5, which presents our experiments and results. We conclude with a discussion of related and future work. 2. PAGERANK The basic idea behind PageRank is simple: a link from a Web page to another can be seen as an endorsement of that page. In general, links are made by people. As such, they are indicative of the quality of the pages to which they point - when creating a page, an author presumably chooses to link to pages deemed to be of good quality. We can take advantage of this linkage information to order Web pages according to their perceived quality. Imagine a Web surfer who jumps from Web page to Web page, choosing with uniform probability which link to follow at each step. In order to reduce the effect of dead-ends or endless cycles the surfer will occasionally jump to a random page with some small probability α, or when on a page with no out-links. If averaged over a sufficient number of steps, the probability the surfer is on page j at some point in time is given by the formula: P(j) = α/N + (1 − α) Σ_{i ∈ B_j} P(i)/|F_i| (1), where F_i is the set of pages that page i links to, and B_j is the set of pages that link to page j. The PageRank score for node j is defined as this probability: PR(j) = P(j). Because equation (1) is recursive, it must be iteratively evaluated until P(j) converges (typically, the initial distribution for P(j) is uniform). The intuition is, because a random surfer would end up at the page more frequently, it is likely a better page. An alternative view for equation (1) is that each page is assigned a quality, P(j). A page gives an equal share of its quality to each page it points to. PageRank is computationally expensive. Our collection of 5 billion pages contains approximately 370 billion links. Computing PageRank requires iterating over these billions of links multiple times (until convergence). It requires large amounts of memory (or very smart caching schemes that slow the computation down even further), and if spread across multiple machines, requires significant communication between them. Though much work has been done on optimizing the PageRank computation (see e.g., [25] and [6]), it remains a relatively slow, computationally expensive property to compute.
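Equation (1) is typically evaluated by power iteration: start from a uniform distribution and reapply the formula until the scores stop changing. The following is a minimal sketch of that iteration on a toy graph, not the computation used in this paper; the toy links, tolerance, and the assumption that every page has at least one out-link are illustrative only.

# Minimal power-iteration sketch of equation (1); illustrative only.
# Assumes every page has at least one out-link (dangling pages are
# handled by the random jump in the full algorithm, ignored here).
def pagerank(out_links, alpha=0.15, tol=1e-8, max_iters=100):
    """out_links: dict mapping each page to the list of pages it links to (F_i)."""
    pages = list(out_links)
    n = len(pages)
    p = {j: 1.0 / n for j in pages}  # uniform initial distribution for P(j)
    for _ in range(max_iters):
        new_p = {}
        for j in pages:
            # Sum over B_j (the pages i that link to j) of P(i) / |F_i|.
            inflow = sum(p[i] / len(out_links[i])
                         for i in pages if j in out_links[i])
            new_p[j] = alpha / n + (1.0 - alpha) * inflow
        if max(abs(new_p[j] - p[j]) for j in pages) < tol:  # converged
            return new_p
        p = new_p
    return p

# Toy three-page graph: a -> {b, c}, b -> {c}, c -> {a}.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))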
3. RANKNET Much work in machine learning has been done on the problems of classification and regression. Let X = {x_i} be a collection of feature vectors (typically, a feature is any real valued number), and Y = {y_i} be a collection of associated classes, where y_i is the class of the object described by feature vector x_i. The classification problem is to learn a function f that maps y_i = f(x_i), for all i. When y_i is real-valued as well, this is called regression. Static ranking can be seen as a regression problem. If we let x_i represent features of page i, and y_i be a value (say, the rank) for each page, we could learn a regression function that mapped each page's features to their rank. However, this over-constrains the problem we wish to solve. All we really care about is the order of the pages, not the actual value assigned to them. Recent work on this ranking problem [7][13][18] directly attempts to optimize the ordering of the objects, rather than the value assigned to them. For these, let Z = {⟨i, j⟩} be a collection of pairs of items, where item i should be assigned a higher value than item j. The goal of the ranking problem, then, is to learn a function f such that, for all ⟨i, j⟩ ∈ Z, f(x_i) > f(x_j). Note that, as with learning a regression function, the result of this process is a function (f) that maps feature vectors to real values. This function can still be applied anywhere that a regression-learned function could be applied. The only difference is the technique used to learn the function. By directly optimizing the ordering of objects, these methods are able to learn a function that does a better job of ranking than do regression techniques. We used RankNet [7], one of the aforementioned techniques for learning ranking functions, to learn our static rank function. RankNet is a straightforward modification to the standard neural network back-prop algorithm. As with back-prop, RankNet attempts to minimize the value of a cost function by adjusting each weight in the network according to the gradient of the cost function with respect to that weight. The difference is that, while a typical neural network cost function is based on the difference between the network output and the desired output, the RankNet cost function is based on the difference between a pair of network outputs. That is, for each pair of feature vectors ⟨i, j⟩ in the training set, RankNet computes the network outputs o_i and o_j. Since vector i is supposed to be ranked higher than vector j, the larger o_j − o_i is, the larger the cost. RankNet also allows the pairs in Z to be weighted with a confidence (posed as the probability that the pair satisfies the ordering induced by the ranking function). In this paper, we used a probability of one for all pairs. In the next section, we will discuss the features used in our feature vectors, x_i.
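To make the pairwise cost concrete, here is a minimal sketch of a RankNet-style cross-entropy cost and a gradient step for one training pair ⟨i, j⟩ (page i should outrank page j). A linear scorer stands in for the paper's two-layer network, and the function names and learning rate are assumptions for illustration only.

import math

def score(w, x):
    # Linear stand-in for the network output o = f(x); the model in
    # the paper is a two-layer neural network, not a linear scorer.
    return sum(wk * xk for wk, xk in zip(w, x))

def pair_cost(w, x_i, x_j):
    # With target probability 1 for every pair (as used in the paper),
    # the cross-entropy cost is log(1 + exp(o_j - o_i)): the larger
    # o_j - o_i, the larger the cost.
    return math.log(1.0 + math.exp(score(w, x_j) - score(w, x_i)))

def pair_update(w, x_i, x_j, lr=0.001):
    # One gradient-descent step on the pair cost for the linear scorer.
    lam = 1.0 / (1.0 + math.exp(score(w, x_i) - score(w, x_j)))
    return [wk - lr * lam * (xjk - xik)
            for wk, xik, xjk in zip(w, x_i, x_j)]

w = pair_update([0.0, 0.0], x_i=[1.0, 0.2], x_j=[0.3, 0.9])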
4. FEATURES To apply RankNet (or other machine learning techniques) to the ranking problem, we needed to extract a set of features from each page. We divided our feature set into four, mutually exclusive, categories: page-level (Page), domain-level (Domain), anchor text and inlinks (Anchor), and popularity (Popularity). We also optionally used the PageRank of a page as a feature. Below, we describe each of these feature categories in more detail. PageRank We computed PageRank on a Web graph of 5 billion crawled pages (and 20 billion known URLs linked to by these pages). This represents a significant portion of the Web, and is approximately the same number of pages as are used by Google, Yahoo, and MSN for their search engines. Because PageRank is a graph-based algorithm, it is important that it be run on as large a subset of the Web as possible. Most previous studies on PageRank used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). We computed PageRank using the standard damping factor of 0.85 (i.e., α = 0.15 in equation (1)). Popularity Another feature we used is the actual popularity of a Web page, measured as the number of times that it has been visited by users over some period of time. We have access to such data from users who have installed the MSN toolbar and have opted to provide it to MSN. The data is aggregated into a count, for each Web page, of the number of users who viewed that page. Though popularity data is generally unavailable, there are two other sources for it. The first is from proxy logs. For example, a university that requires its students to use a proxy has a record of all the pages they have visited while on campus. Unfortunately, proxy data is quite biased and relatively small. Another source, internal to search engines, is the record of which results their users clicked on. Such data was used by the search engine Direct Hit, and has recently been explored for dynamic ranking purposes [20]. An advantage of the toolbar data over this is that it contains information about URL visits that are not just the result of a search. The raw popularity is processed into a number of features such as the number of times a page was viewed and the number of times any page in the domain was viewed. More details are provided in Section 5.5. Anchor text and inlinks These features are based on the information associated with links to the page in question. It includes features such as the total amount of text in links pointing to the page (anchor text), the number of unique words in that text, etc. Page This category consists of features which may be determined by looking at the page (and its URL) alone. We used only eight, simple features such as the number of words in the body, the frequency of the most common term, etc. Domain This category contains features that are computed as averages across all pages in the domain. For example, the average number of outlinks on any page and the average PageRank. Many of these features have been used by others for ranking Web pages, particularly the anchor and page features. As mentioned, the evaluation is typically for dynamic ranking, and we wish to evaluate the use of them for static ranking. Also, to our knowledge, this is the first study on the use of actual page visitation popularity for static ranking. The closest similar work is on using click-through behavior (that is, which search engine results the users click on) to affect dynamic ranking (see e.g., [20]). Because we use a wide variety of features to come up with a static ranking, we refer to this as fRank (for feature-based ranking). fRank uses RankNet and the set of features described in this section to learn a ranking function for Web pages. Unless otherwise specified, fRank was trained with all of the features. 5. EXPERIMENTS In this section, we will demonstrate that we can outperform PageRank by applying machine learning to a straightforward set of features. Before the results, we first discuss the data, the performance metric, and the training method. 5.1 Data In order to evaluate the quality of a static ranking, we needed a gold standard defining the correct ordering for a set of pages.
For this, we employed a dataset which contains human judgments for 28000 queries. For each query, a number of results are manually assigned a rating, from 0 to 4, by human judges. The rating is meant to be a measure of how relevant the result is for the query, where 0 means poor and 4 means excellent. There are approximately 500k judgments in all, or an average of 18 ratings per query. The queries are selected by randomly choosing queries from among those issued to the MSN search engine. The probability that a query is selected is proportional to its frequency among all of the queries. As a result, common queries are more likely to be judged than uncommon queries. As an example of how diverse the queries are, the first four queries in the training set are chef schools, chicagoland speedway, eagles fan club, and Turkish culture. The documents selected for judging are those that we expected would, on average, be reasonably relevant (for example, the top ten documents returned by MSN's search engine). This provides significantly more information than randomly selecting documents on the Web, the vast majority of which would be irrelevant to a given query. Because of this process, the judged pages tend to be of higher quality than the average page on the Web, and tend to be pages that will be returned for common search queries. This bias is good when evaluating the quality of static ranking for the purposes of index ordering and returning relevant documents. This is because the most important portion of the index to be well-ordered and relevant is the portion that is frequently returned for search queries. Because of this bias, however, the results in this paper are not applicable to crawl prioritization. In order to obtain experimental results on crawl prioritization, we would need ratings on a random sample of Web pages. To convert the data from query-dependent to query-independent, we simply removed the query, taking the maximum over judgments for a URL that appears in more than one query. The reasoning behind this is that a page that is relevant for some query and irrelevant for another is probably a decent page and should have a high static rank. Because we evaluated the pages on queries that occur frequently, our data indicates the correct index ordering, and assigns high value to pages that are likely to be relevant to a common query. We randomly assigned queries to a training, validation, or test set, such that they contained 84%, 8%, and 8% of the queries, respectively. Each set contains all of the ratings for a given query, and no query appears in more than one set. The training set was used to train fRank. The validation set was used to select the model that had the highest performance. The test set was used for the final results. This data gives us a query-independent ordering of pages. The goal for a static ranking algorithm will be to reproduce this ordering as closely as possible. In the next section, we describe the measure we used to evaluate this. 5.2 Measure We chose to use pairwise accuracy to evaluate the quality of a static ranking. The pairwise accuracy is the fraction of time that the ranking algorithm and human judges agree on the ordering of a pair of Web pages. If S(x) is the static ranking assigned to page x, and H(x) is the human judgment of relevance for x, then consider the following sets: H_p = {⟨x, y⟩ : H(x) > H(y)} and S_p = {⟨x, y⟩ : S(x) > S(y)}. The pairwise accuracy is the portion of H_p that is also contained in S_p: pairwise accuracy = |H_p ∩ S_p| / |H_p|. This measure was chosen for two reasons. First, the discrete human judgments provide only a partial ordering over Web pages, making it difficult to apply a measure such as the Spearman rank order correlation coefficient (in the pairwise accuracy measure, a pair of documents with the same human judgment does not affect the score). Second, the pairwise accuracy has an intuitive meaning: it is the fraction of pairs of documents that, when the humans claim one is better than the other, the static rank algorithm orders them correctly.
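A minimal sketch of this measure follows; static_rank and human_judgment are assumed to be dictionaries mapping each page to its score, and pairs tied in the human ratings are skipped, as specified above.

from itertools import combinations

def pairwise_accuracy(static_rank, human_judgment):
    # |H_p intersect S_p| / |H_p|: among the pairs that the human
    # judgments order (H_p), the fraction that the static ranking
    # orders the same way (also in S_p).
    agree, total = 0, 0
    for x, y in combinations(human_judgment, 2):
        if human_judgment[x] == human_judgment[y]:
            continue  # ties do not affect the score
        if human_judgment[x] < human_judgment[y]:
            x, y = y, x  # ensure H(x) > H(y)
        total += 1
        if static_rank[x] > static_rank[y]:
            agree += 1
    return agree / total if total else 0.0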
5.3 Method We trained fRank (a RankNet-based neural network) using the following parameters. We used a fully connected 2 layer network. The hidden layer had 10 hidden nodes. The input weights to this layer were all initialized to be zero. The output layer (just a single node) weights were initialized using a uniform random distribution in the range [-0.1, 0.1]. We used tanh as the transfer function from the inputs to the hidden layer, and a linear function from the hidden layer to the output. The cost function is the pairwise cross entropy cost function as discussed in Section 3. The features in the training set were normalized to have zero mean and unit standard deviation. The same linear transformation was then applied to the features in the validation and test sets. For training, we presented the network with 5 million pairings of pages, where one page had a higher rating than the other. The pairings were chosen uniformly at random (with replacement) from all possible pairings. When forming the pairs, we ignored the magnitude of the difference between the ratings (the rating spread) for the two URLs. Hence, the weight for each pair was constant (one), and the probability of a pair being selected was independent of its rating spread. We trained the network for 30 epochs. On each epoch, the training pairs were randomly shuffled. The initial training rate was 0.001. At each epoch, we checked the error on the training set. If the error had increased, then we decreased the training rate, under the hypothesis that the network had probably overshot. The training rate at each epoch was thus set to: training rate = κ / (1 + ε), where κ is the initial rate (0.001), and ε is the number of times the training-set error has increased. After each epoch, we measured the performance of the neural network on the validation set, using 1 million pairs (chosen randomly with replacement). The network with the highest pairwise accuracy on the validation set was selected, and then tested on the test set. We report the pairwise accuracy on the test set, calculated using all possible pairs. These parameters were determined and fixed before the static rank experiments in this paper. In particular, the choice of initial training rate, number of epochs, and training rate decay function were taken directly from Burges et al. [7].
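As a small illustration of the decay rule above, this sketch yields the rate used at each epoch given the sequence of observed training-set errors; the generator form and example error values are assumptions, not the paper's code.

def training_rates(epoch_errors, kappa=0.001):
    # rate = kappa / (1 + eps), where eps counts the epochs on which
    # the training-set error increased (illustrative reconstruction).
    eps, prev_err = 0, float("inf")
    for err in epoch_errors:
        if err > prev_err:
            eps += 1
        prev_err = err
        yield kappa / (1.0 + eps)

# The error rises at the third epoch, so the rate halves to 0.0005.
print(list(training_rates([0.30, 0.25, 0.27, 0.24])))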
Though we had the option of preprocessing any of the features before they were input to the neural network, we refrained from doing so on most of them. The only exception was the popularity features. As with most Web phenomena, we found that the distribution of site popularity is Zipfian. To reduce the dynamic range, and hopefully make the feature more useful, we presented the network with both the unpreprocessed popularity features and their logarithms (as with the others, the logarithmic feature values were also normalized to have zero mean and unit standard deviation). Applying fRank to a document is computationally efficient, taking time that is only linear in the number of input features; it is thus within a constant factor of other simple machine learning methods such as naïve Bayes. In our experiments, computing the fRank for all five billion Web pages was approximately 100 times faster than computing the PageRank for the same set. 5.4 Results As Table 1 shows, fRank significantly outperforms PageRank for the purposes of static ranking. With a pairwise accuracy of 67.4%, fRank more than doubles the accuracy of PageRank (relative to the baseline of 50%, which is the accuracy that would be achieved by a random ordering of Web pages). Note that one of fRank's input features is the PageRank of the page, so we would expect it to perform no worse than PageRank. The significant increase in accuracy implies that the other features (anchor, popularity, etc.) do in fact contain useful information regarding the overall quality of a page.

Table 1: Basic Results
Technique        Accuracy (%)
None (Baseline)  50.00
PageRank         56.70
fRank            67.43

There are a number of decisions that go into the computation of PageRank, such as how to deal with pages that have no outlinks, the choice of α, numeric precision, convergence threshold, etc. We were able to obtain a computation of PageRank from a completely independent implementation (provided by Marc Najork) that varied somewhat in these parameters. It achieved a pairwise accuracy of 56.52%, nearly identical to that obtained by our implementation. We thus concluded that the quality of the PageRank is not sensitive to these minor variations in algorithm, nor was PageRank's low accuracy due to problems with our implementation of it. We also wanted to find how well each feature set performed. To answer this, for each feature set, we trained and tested fRank using only that set of features. The results are shown in Table 2. As can be seen, every single feature set individually outperformed PageRank on this test. Perhaps the most interesting result is that the Page-level features had the highest performance out of all the feature sets. This is surprising because these are features that do not depend on the overall graph structure of the Web, nor even on what pages point to a given page. This is contrary to the common belief that the Web graph structure is the key to finding a good static ranking of Web pages.

Table 2: Results for individual feature sets.
Feature Set    Accuracy (%)
PageRank       56.70
Popularity     60.82
Anchor         59.09
Page           63.93
Domain         59.03
All Features   67.43

Because we are using a two-layer neural network, the features in the learned network can interact with each other in interesting, nonlinear ways. This means that a particular feature that appears to have little value in isolation could actually be very important when used in combination with other features. To measure the final contribution of a feature set, in the context of all the other features, we performed an ablation study. That is, for each set of features, we trained a network to contain all of the features except that set. We then compared the performance of the resulting network to the performance of the network with all of the features.
Table 3 shows the results of this experiment, where the decrease in accuracy is the difference in pairwise accuracy between the network trained with all of the features, and the network missing the given feature set.

Table 3: Ablation study. Shown is the decrease in accuracy when we train a network that has all but the given set of features. The last line shows the effect of removing the anchor, PageRank, and domain features, hence a model containing no network- or link-based information whatsoever.
Feature Set                  Decrease in Accuracy
PageRank                     0.18
Popularity                   0.78
Anchor                       0.47
Page                         5.42
Domain                       0.10
Anchor, PageRank & Domain    0.60

The results of the ablation study are consistent with the individual feature set study. Both show that the most important feature set is the Page-level feature set, and the second most important is the popularity feature set. Finally, we wished to see how the performance of fRank improved as we added features; we wanted to find at what point adding more feature sets became relatively useless. Beginning with no features, we greedily added the feature set that improved performance the most. The results are shown in Table 4. For example, the fourth line of the table shows that fRank using the page, popularity, and anchor features outperformed any network that used the page, popularity, and some other feature set, and that the performance of this network was 67.25%.

Table 4: fRank performance as feature sets are added. At each row, the feature set that gave the greatest increase in accuracy was added to the list of features (i.e., we conducted a greedy search over feature sets).
Feature Set   Accuracy (%)
None          50.00
+Page         63.93
+Popularity   66.83
+Anchor       67.25
+PageRank     67.31
+Domain       67.43

Finally, we present a qualitative comparison of PageRank vs. fRank. In Table 5 are the top ten URLs returned for PageRank and for fRank. PageRank's results are heavily weighted towards technology sites. It contains two QuickTime URLs (Apple's video playback software), as well as Internet Explorer and FireFox URLs (both of which are Web browsers). fRank, on the other hand, contains more consumer-oriented sites such as American Express, Target, Dell, etc. PageRank's bias toward technology can be explained through two processes. First, there are many pages with buttons at the bottom suggesting that the site is optimized for Internet Explorer, or that the visitor needs QuickTime. These generally link back to, in these examples, the Internet Explorer and QuickTime download sites. Consequently, PageRank ranks those pages highly. Though these pages are important, they are not as important as it may seem by looking at the link structure alone. One fix for this is to add information about the link to the PageRank computation, such as the size of the text, whether it was at the bottom of the page, etc. The other bias comes from the fact that the population of Web site authors is different than the population of Web users. Web authors tend to be technologically-oriented, and thus their linking behavior reflects those interests. fRank, by knowing the actual visitation popularity of a site (the popularity feature set), is able to eliminate some of that bias. It has the ability to depend more on where actual Web users visit rather than where the Web site authors have linked. The results confirm that fRank outperforms PageRank in pairwise accuracy. The two most important feature sets are the page and popularity features.
This is surprising, as the page features consisted only of a few (8) simple features. Further experiments found that, of the page features, those based on the text of the page (as opposed to the URL) performed the best. In the next section, we explore the popularity feature in more detail. 5.5 Popularity Data As mentioned in Section 4, our popularity data came from MSN toolbar users. For privacy reasons, we had access only to an aggregate count of, for each URL, how many times it was visited by any toolbar user. This limited the possible features we could derive from this data. For possible extensions, see Section 6.3, future work. For each URL in our train and test sets, we provided a feature to fRank which was how many times it had been visited by a toolbar user. However, this feature was quite noisy and sparse, particularly for URLs with query parameters (e.g., http://search.msn.com/results.aspx?q=machine+learning&form=QBHP). One solution was to provide an additional feature which was the number of times any URL at the given domain was visited by a toolbar user. Adding this feature dramatically improved the performance of fRank. We took this one step further and used the built-in hierarchical structure of URLs to construct many levels of backoff between the full URL and the domain. We did this by using the set of features shown in Table 6.

Table 6: URL functions used to compute the Popularity feature set.
Function    Example
Exact URL   cnn.com/2005/tech/wikipedia.html?v=mobile
No Params   cnn.com/2005/tech/wikipedia.html
Page        wikipedia.html
URL-1       cnn.com/2005/tech
URL-2       cnn.com/2005
...
Domain      cnn.com
Domain+1    cnn.com/2005
...

Each URL was assigned one feature for each function shown in the table. The value of the feature was the count of the number of times a toolbar user visited a URL, where the function applied to that URL matches the function applied to the URL in question. For example, a user's visit to cnn.com/2005/sports.html would increment the Domain and Domain+1 features for the URL cnn.com/2005/tech/wikipedia.html. As seen in Table 7, adding the domain counts significantly improved the quality of the popularity feature, and adding the numerous backoff functions listed in Table 6 improved the accuracy even further.

Table 7: Effect of adding backoff to the popularity feature set.
Features                          Accuracy (%)
URL count                         58.15
URL and Domain counts             59.31
All backoff functions (Table 6)   60.82

Table 5: Top ten URLs for PageRank vs. fRank
PageRank                        fRank
google.com                      google.com
apple.com/quicktime/download    yahoo.com
amazon.com                      americanexpress.com
yahoo.com                       hp.com
microsoft.com/windows/ie        target.com
apple.com/quicktime             bestbuy.com
mapquest.com                    dell.com
ebay.com                        autotrader.com
mozilla.org/products/firefox    dogpile.com
ftc.gov                         bankofamerica.com

Backing off to subsets of the URL is one technique for dealing with the sparsity of data. It is also informative to see how the performance of fRank depends on the amount of popularity data that we have collected. In Figure 1 we show the performance of fRank trained with only the popularity feature set vs. the amount of data we have for the popularity feature set. Each day, we receive additional popularity data, and as can be seen in the plot, this increases the performance of fRank. The relation is logarithmic: doubling the amount of popularity data provides a constant improvement in pairwise accuracy. In summary, we have found that the popularity features provide a useful boost to the overall fRank accuracy. Gathering more popularity data, as well as employing simple backoff strategies, improves this boost even further.
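The backoff functions in Table 6 follow directly from a URL's hierarchical structure. Below is a minimal sketch of one way to derive them; the exact parsing rules behind the paper's features are not specified, so this is an assumed reconstruction for scheme-less URLs like the Table 6 example.

from urllib.parse import urlsplit

def backoff_keys(url):
    # Derive the Table 6 key for each backoff function from a
    # scheme-less URL (assumed reconstruction, illustrative only).
    parts = urlsplit("//" + url)
    segs = [s for s in parts.path.split("/") if s]
    keys = {"Exact URL": url, "Domain": parts.netloc,
            "No Params": "/".join([parts.netloc] + segs)}
    if segs:
        keys["Page"] = segs[-1]
    for k in range(1, len(segs)):
        keys["URL-%d" % k] = "/".join([parts.netloc] + segs[:-k])
        keys["Domain+%d" % k] = "/".join([parts.netloc] + segs[:k])
    return keys

# Reproduces the example rows of Table 6 for this URL.
print(backoff_keys("cnn.com/2005/tech/wikipedia.html?v=mobile"))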
5.6 Summary of Results The experiments provide a number of conclusions. First, fRank performs significantly better than PageRank, even without any information about the Web graph. Second, the page level and popularity features were the most significant contributors to pairwise accuracy. Third, by collecting more popularity data, we can continue to improve fRank's performance. The popularity data provides two benefits to fRank. First, we see that qualitatively, fRank's ordering of Web pages has a more favorable bias than PageRank's. fRank's ordering seems to correspond to what Web users, rather than Web page authors, prefer. Second, the popularity data is more timely than PageRank's link information. The toolbar provides information about which Web pages people find interesting right now, whereas links are added to pages more slowly, as authors find the time and interest. 6. RELATED AND FUTURE WORK 6.1 Improvements to PageRank Since the original PageRank paper, there has been work on improving it. Much of that work centers on speeding up and parallelizing the computation [15][25]. One recognized problem with PageRank is that of topic drift: a page about dogs will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic. In contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs. Hence, a link that is on topic should have higher weight than a link that is not. Richardson and Domingos's Query Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem. Other variations to PageRank include differently weighting links for inter- vs. intra-domain links, adding a backwards step to the random surfer to simulate the back button on most browsers [24] and modifying the jump probability (α) [3]. See Langville and Meyer [23] for a good survey of these, and other modifications to PageRank. 6.2 Other related work PageRank is not the only link analysis algorithm used for ranking Web pages. The most well-known other is HITS [22], which is used by the Teoma search engine [30]. HITS produces a list of hubs and authorities, where hubs are pages that point to many authority pages, and authorities are pages that are pointed to by many hubs. Previous work has shown HITS to perform comparably to PageRank [1]. One field of interest is that of static index pruning (see e.g., Carmel et al. [8]). Static index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query. The pruning is typically done based on the frequency of query terms. Similarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search. Similar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page). Others have investigated the effect that PageRank has on the Web at large [9]. They argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages. The same may occur for the popularity data.
If we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity. Cho et al. [10] argue that a more appropriate measure of Web page quality would depend on not only the current link structure of the Web, but also on the change in that link structure. The same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity. One interesting related work is that of Ivory and Hearst [19]. Their goal was to build a model of Web sites that are considered high quality from the perspective of content, structure and navigation, visual design, functionality, interactivity, and overall experience. They used over 100 page level features, as well as features encompassing the performance and structure of the site. This let them qualitatively describe the qualities of a page that make it appear attractive (e.g., rare use of italics, at least 9 point font, ...), and (in later work) to build a system that assists novice Web page authors in creating quality pages by evaluating them according to these features. The primary differences between this work and ours are the goal (discovering what constitutes a good Web page vs. ordering Web pages for the purposes of Web search), the size of the study (they used a dataset of less than 6000 pages vs. our set of 468,000), and our comparison with PageRank.

[Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set (pairwise accuracy vs. days of toolbar data). Note the x-axis is a logarithmic scale; the fitted trend is y = 0.577 ln(x) + 58.283 (R² = 0.9822).]
Nevertheless, their work provides insights to additional useful static features that we could incorporate into fRank in the future. Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al. [11] present a method for determining the best transformation to apply to query independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al. [2] apply machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking). They do not apply their techniques to the problem of static ranking. 6.3 Future work There are many ways in which we would like to extend this work. First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features. In particular the existence, or lack thereof, of certain words could prove very significant (for instance, "under construction" probably signifies a low quality page). Other features could include the number of images on a page, size of those images, number of layout elements (tables, divs, and spans), use of style sheets, conforming to W3C standards (like XHTML 1.0 Strict), background color of a page, etc. Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features. The resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc. fRank allows one to specify a confidence in each pairing of documents. In the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair. For example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2. The experiments in this paper are biased toward pages that have higher than average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy. Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself. Work on biasing the PageRank jump vector [16], and transition matrix [29], have demonstrated the feasibility and advantages of such an approach. There is reason to believe that a direct application of [29], using the fRank of a page for its relevance, could lead to an improved overall static rank. Finally, the popularity data can be used in other interesting ways. The general surfing and searching habits of Web users varies by time of day. Activity in the morning, daytime, and evening are often quite different (e.g., reading the news, solving problems, and accessing entertainment, respectively). We can gain insight into these differences by using the popularity data, divided into segments of the day. When a query is issued, we would then use the popularity data matching the time of query in order to do the ranking of Web pages. We also plan to explore popularity features that use more than just the counts of how often a page was visited. For example, how long users tended to dwell on a page, did they leave the page by clicking a link or by hitting the back button, etc. Fox et al. did a study that showed that features such as these can be valuable for the purposes of dynamic ranking [14]. Finally, the popularity data could be used as the label rather than as a feature. Using fRank in this way to predict the popularity of a page may be useful for the tasks of relevance, efficiency, and crawl priority. There is also significantly more popularity data than human labeled data, potentially enabling more complex machine learning methods, and significantly more features. 7. CONCLUSIONS A good static ranking is an important component for today's search engines and information retrieval systems. We have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually outperform PageRank. By combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone. A qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit.
The machine learning component of fRank gives it the additional benefit of being more robust against spammers, and allows it to leverage further developments in the machine learning community in areas such as adversarial classification. We have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data. 8. ACKNOWLEDGMENTS Thank you to Marc Najork for providing us with additional PageRank computations and to Timo Burkard for assistance with the popularity data. Many thanks to Chris Burges for providing code and significant support in using and training RankNets. Also, we thank Susan Dumais and Nick Craswell for their edits and suggestions. 9. REFERENCES [1] B. Amento, L. Terveen, and W. Hill. Does authority mean quality? Predicting expert quality ratings of Web documents. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2000. [2] B. Bartell, G. Cottrell, and R. Belew. Automatic combination of multiple ranked retrieval systems. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1994. [3] P. Boldi, M. Santini, and S. Vigna. PageRank as a function of the damping factor. In Proceedings of the International World Wide Web Conference, May 2005. [4] J. Boyan, D. Freitag, and T. Joachims. A machine learning architecture for optimizing web search engines. In AAAI Workshop on Internet Based Information Systems, August 1996. [5] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International Wide Web Conference, Brisbane, Australia, 1998. Elsevier. [6] A. Broder, R. Lempel, F. Maghoul, and J. Pederson. Efficient PageRank approximation via graph aggregation. In Proceedings of the International World Wide Web Conference, May 2004. [7] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 2005. [8] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. S. Maarek, and A. Soffer. Static index pruning for information retrieval systems. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 43-50, New Orleans, Louisiana, USA, September 2001. [9] J. Cho and S. Roy. Impact of search engines on page popularity. In Proceedings of the International World Wide Web Conference, May 2004. [10] J. Cho, S. Roy, R. Adams. Page Quality: In search of an unbiased web ranking. In Proceedings of the ACM SIGMOD 2005 Conference. Baltimore, Maryland. June 2005. [11] N. Craswell, S. Robertson, H. Zaragoza, and M. Taylor. Relevance weighting for query independent evidence. In Proceedings of the 28th Annual Conference on Research and Development in Information Retrieval (SIGIR), August, 2005. [12] N. Dalvi, P. Domingos, Mausam, S. Sanghai, D. Verma. Adversarial Classification. In Proceedings of the Tenth International Conference on Knowledge Discovery and Data Mining (pp. 99-108), Seattle, WA, 2004. [13] O. Dekel, C. Manning, and Y. Singer. Log-linear models for label-ranking. In Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press, 2003. [14] S. Fox, K. Karnawat, M. Mydland, S. T.
Dumais, and T. White (2005). Evaluating implicit measures to improve the search experience. ACM Transactions on Information Systems, 23(2), pp. 147-168, April 2005. [15] T. Haveliwala. Efficient computation of PageRank. Stanford University Technical Report, 1999. [16] T. Haveliwala. Topic-sensitive PageRank. In Proceedings of the International World Wide Web Conference, May 2002. [17] D. Hawking and N. Craswell. Very large scale retrieval and Web search. In D. Harman and E. Voorhees (eds), The TREC Book. MIT Press. [18] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression. In Proceedings of the Ninth International Conference on Artificial Neural Networks, pp. 97-102. 1999. [19] M. Ivory and M. Hearst. Statistical profiles of highly-rated Web sites. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 2002. [20] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), 2002. [21] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay. Accurately Interpreting Clickthrough Data as Implicit Feedback. In Proceedings of the Conference on Research and Development in Information Retrieval (SIGIR), 2005. [22] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM 46:5, pp. 604-632. 1999. [23] A. Langville and C. Meyer. Deeper inside PageRank. Internet Mathematics 1(3):335-380, 2004. [24] F. Matthieu and M. Bouklit. The effect of the back button in a random walk: application for PageRank. In Alternate track papers and posters of the Thirteenth International World Wide Web Conference, 2004. [25] F. McSherry. A uniform approach to accelerated PageRank computation. In Proceedings of the International World Wide Web Conference, May 2005. [26] Y. Minamide. Static approximation of dynamically generated Web pages. In Proceedings of the International World Wide Web Conference, May 2005. [27] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University, Stanford, CA, 1998. [28] S. Pandey and C. Olston. User-centric Web crawling. In Proceedings of the International World Wide Web Conference, May 2005. [29] M. Richardson and P. Domingos. The intelligent surfer: probabilistic combination of link and content information in PageRank. In Advances in Neural Information Processing Systems 14, pp. 1441-1448. Cambridge, MA: MIT Press, 2002. [30] C. Sherman. Teoma vs. Google, Round 2. Available from World Wide Web (http://dc.internet.com/news/article.php/1002061), 2002. [31] T. Upstill, N. Craswell, and D. Hawking. Predicting fame and fortune: PageRank or indegree? In the Eighth Australasian Document Computing Symposium. 2003. [32] T. Upstill, N. Craswell, and D. Hawking. Query-independent evidence in home page finding. In ACM Transactions on Information Systems. 2003.
Beyond PageRank: Machine Learning for Static Ranking

ABSTRACT
Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random).

1. INTRODUCTION
Over the past decade, the Web has grown exponentially in size. Unfortunately, this growth has not been isolated to good-quality pages. The number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly. The sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information. Users rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first. To date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (query-dependent ranking, or dynamic ranking). However, having a good query-independent ranking (static ranking) is also crucially important for a search engine. A good static ranking algorithm provides numerous benefits:
• Relevance: The static rank of a page provides a general indicator of the overall quality of the page. This is a useful input to the dynamic ranking algorithm.
• Efficiency: Typically, the search engine's index is ordered by static rank. By traversing the index from high-quality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high a dynamic rank as those already found. The more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to queries.
• Crawl Priority: The Web grows and changes as quickly as search engines can crawl it. Search engines need a way to prioritize their crawl: to determine which pages to re-crawl, how frequently, and how often to seek out new pages. Among other factors, the static rank of a page is used to determine this prioritization. A better static rank thus provides the engine with a higher-quality, more up-to-date index.
Google is often regarded as the first commercially successful search engine. Their ranking was originally based on the PageRank algorithm [5][27]. Due to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages. Though PageRank has historically been thought to perform quite well, there has as yet been little academic evidence to support this claim. Even worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks. Upstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32]. They found similar results for the task of finding high quality companies [31].
PageRank has also been used in systems for TREC's "very large collection" and "Web track" competitions, but with much less success than had been expected [17]. Finally, Amento et al. [1] found that simple features, such as the number of pages on a site, performed as well as PageRank. Despite these findings, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank. Failing this, it is still assumed that using the link structure is crucial, in the form of the number of inlinks or the amount of anchor text. In this paper, we show that there are a number of simple URL- or page-based features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the structure of the Web. We combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels). A machine learning approach for static ranking has other advantages besides the quality of the ranking. Because the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming). This is particularly true if the feature set is not known. In contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page. With an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank. This flexibility allows a ranking system to rapidly react to new spamming techniques. A machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field. For example, recent work on adversarial classification [12] suggests that it may be possible to explicitly model the Web page spammer's (the adversary's) actions, adjusting the ranking model in advance of the spammer's attempts to circumvent it. Another example is the elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank. By moving static ranking to a machine learning framework, we not only gain in accuracy, but also gain in the ability to react to spammers' actions, to rapidly add new features to the ranking algorithm, and to leverage advances in the rapidly growing field of machine learning. Finally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet. These are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful. For example, the author of an intranet page and his/her position in the organization (e.g., CEO, manager, or developer) could provide significant clues as to the importance of that page. A machine learning approach thus allows rapid development of a good static algorithm in new domains. This paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages. Previous studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages).
Also, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering. In contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages. We first briefly describe the PageRank algorithm. In Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking. Section 4 describes the static features. The heart of the paper is in Section 5, which presents our experiments and results. We conclude with a discussion of related and future work.

2. PAGERANK
The basic idea behind PageRank is simple: a link from a Web page to another can be seen as an endorsement of that page. In general, links are made by people. As such, they are indicative of the quality of the pages to which they point: when creating a page, an author presumably chooses to link to pages deemed to be of good quality. We can take advantage of this linkage information to order Web pages according to their perceived quality. Imagine a Web surfer who jumps from Web page to Web page, choosing with uniform probability which link to follow at each step. In order to reduce the effect of dead-ends or endless cycles, the surfer will occasionally jump to a random page, with some small probability 1 − α, or when on a page with no out-links. If averaged over a sufficient number of steps, the probability that the surfer is on page j at some point in time is given by the formula

P(j) = (1 − α)/N + α · Σ_{i ∈ Bj} P(i)/|Fi|,    (1)

where N is the total number of pages, Fi is the set of pages that page i links to, and Bj is the set of pages that link to page j. The PageRank score for node j is defined as this probability: PR(j) = P(j). Because equation (1) is recursive, it must be iteratively evaluated until P(j) converges (typically, the initial distribution for P(j) is uniform). The intuition is that, because a random surfer would end up at the page more frequently, it is likely a better page. An alternative view of equation (1) is that each page is assigned a quality, P(j). A page "gives" an equal share of its quality to each page it points to. PageRank is computationally expensive. Our collection of 5 billion pages contains approximately 370 billion links. Computing PageRank requires iterating over these billions of links multiple times (until convergence). It requires large amounts of memory (or very smart caching schemes that slow the computation down even further), and if spread across multiple machines, requires significant communication between them. Though much work has been done on optimizing the PageRank computation (see e.g., [25] and [6]), it remains a relatively slow, computationally expensive property to compute.
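The iterative evaluation of equation (1) is typically implemented as a power iteration over the link graph. Below is a minimal sketch in Python; it is an illustration, not the implementation used in this paper, and the toy graph, the uniform treatment of dangling pages, and the convergence tolerance are all assumptions.

# Minimal PageRank power iteration for equation (1); a sketch, not the
# paper's code. `graph` maps each page to the set of pages it links to
# (its Fi); alpha is the probability of following a link (0.85 below).
def pagerank(graph, alpha=0.85, tol=1e-8, max_iters=100):
    pages = list(graph)
    n = len(pages)
    p = dict.fromkeys(pages, 1.0 / n)               # uniform initial distribution
    for _ in range(max_iters):
        new_p = dict.fromkeys(pages, (1.0 - alpha) / n)  # random-jump term
        for i, out_links in graph.items():
            targets = out_links or pages            # dangling page: jump anywhere
            share = alpha * p[i] / len(targets)     # page i "gives" equal shares
            for j in targets:
                new_p[j] += share
        if sum(abs(new_p[v] - p[v]) for v in pages) < tol:  # L1 convergence test
            return new_p
        p = new_p
    return p

# Tiny three-page example: a links to b; b links to a and c; c is dangling.
print(pagerank({"a": {"b"}, "b": {"a", "c"}, "c": set()}))

On a real Web graph the distribution would be stored in sparse arrays rather than dictionaries, but the structure of the iteration is the same.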
3. RANKNET
Much work in machine learning has been done on the problems of classification and regression. Let X = {xi} be a collection of feature vectors (typically, a feature is any real-valued number), and Y = {yi} be a collection of associated classes, where yi is the class of the object described by feature vector xi. The classification problem is to learn a function f that maps yi = f(xi), for all i. When yi is real-valued as well, this is called regression. Static ranking can be seen as a regression problem. If we let xi represent features of page i, and yi be a value (say, the rank) for each page, we could learn a regression function that mapped each page's features to their rank. However, this over-constrains the problem we wish to solve. All we really care about is the order of the pages, not the actual value assigned to them. Recent work on this ranking problem [7][13][18] directly attempts to optimize the ordering of the objects, rather than the value assigned to them. For these, let Z = {<i, j>} be a collection of pairs of items, where item i should be assigned a higher value than item j. The goal of the ranking problem, then, is to learn a function f such that

f(xi) > f(xj) for all <i, j> ∈ Z.

Note that, as with learning a regression function, the result of this process is a function (f) that maps feature vectors to real values. This function can still be applied anywhere that a regression-learned function could be applied. The only difference is the technique used to learn the function. By directly optimizing the ordering of objects, these methods are able to learn a function that does a better job of ranking than do regression techniques. We used RankNet [7], one of the aforementioned techniques for learning ranking functions, to learn our static rank function. RankNet is a straightforward modification to the standard neural network back-prop algorithm. As with back-prop, RankNet attempts to minimize the value of a cost function by adjusting each weight in the network according to the gradient of the cost function with respect to that weight. The difference is that, while a typical neural network cost function is based on the difference between the network output and the desired output, the RankNet cost function is based on the difference between a pair of network outputs. That is, for each pair of feature vectors <i, j> in the training set, RankNet computes the network outputs oi and oj. Since vector i is supposed to be ranked higher than vector j, the larger oj − oi is, the larger the cost. RankNet also allows the pairs in Z to be weighted with a confidence (posed as the probability that the pair satisfies the ordering induced by the ranking function). In this paper, we used a probability of one for all pairs. In the next section, we will discuss the features used in our feature vectors, xi.
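To make the pairwise training step concrete, here is a toy NumPy sketch of a RankNet-style update; it is not the implementation from [7]. It uses the network shape described later in Section 5.3 (10 tanh hidden units, zero-initialized input weights, output weights uniform in [-0.1, 0.1]) and the pairwise logistic cost C = log(1 + exp(oj - oi)); the synthetic training data is an assumption purely so the example runs.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 8, 10
W1 = np.zeros((n_hidden, n_features))       # input weights initialized to zero
b1 = np.zeros(n_hidden)
w2 = rng.uniform(-0.1, 0.1, size=n_hidden)  # output weights uniform in [-0.1, 0.1]
b2 = 0.0

def score(x):
    """f(x) = w2 . tanh(W1 x + b1) + b2; returns the score and the hidden layer."""
    h = np.tanh(W1 @ x + b1)
    return w2 @ h + b2, h

def train_pair(xi, xj, lr=0.001):
    """One gradient step on a pair <i, j> where page i should outrank page j."""
    global W1, b1, w2, b2
    (oi, hi), (oj, hj) = score(xi), score(xj)
    dC_ds = -1.0 / (1.0 + np.exp(oi - oj))  # d/ds of log(1 + e^(-s)), s = oi - oj
    gW1 = np.zeros_like(W1); gb1 = np.zeros_like(b1)
    gw2 = np.zeros_like(w2); gb2 = 0.0
    for x, h, sign in ((xi, hi, 1.0), (xj, hj, -1.0)):
        g = sign * dC_ds                    # dC/do for this page's output
        dh = g * w2 * (1.0 - h ** 2)        # backprop through tanh
        gW1 += np.outer(dh, x); gb1 += dh
        gw2 += g * h; gb2 += g
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

# Toy data: pretend that the sum of a page's features reflects its quality.
for _ in range(5000):
    xa, xb = rng.normal(size=n_features), rng.normal(size=n_features)
    xi, xj = (xa, xb) if xa.sum() > xb.sum() else (xb, xa)
    train_pair(xi, xj)

With s = oi − oj, the cost log(1 + e^(−s)) has derivative −1/(1 + e^s) with respect to s, so the more the pair is misordered (the larger oj − oi), the larger both the cost and the corrective update.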
4. FEATURES
To apply RankNet (or other machine learning techniques) to the ranking problem, we needed to extract a set of features from each page. We divided our feature set into four mutually exclusive categories: page-level (Page), domain-level (Domain), anchor text and inlinks (Anchor), and popularity (Popularity). We also optionally used the PageRank of a page as a feature. Below, we describe each of these feature categories in more detail.
PageRank: We computed PageRank on a Web graph of 5 billion crawled pages (and 20 billion known URLs linked to by these pages). This represents a significant portion of the Web, and is approximately the same number of pages as are used by Google, Yahoo, and MSN for their search engines. Because PageRank is a graph-based algorithm, it is important that it be run on as large a subset of the Web as possible. Most previous studies on PageRank used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). We computed PageRank using the standard value of 0.85 for α.
Popularity: Another feature we used is the actual popularity of a Web page, measured as the number of times that it has been visited by users over some period of time. We have access to such data from users who have installed the MSN toolbar and have opted to provide it to MSN. The data is aggregated into a count, for each Web page, of the number of users who viewed that page. Though popularity data is generally unavailable, there are two other sources for it. The first is from proxy logs. For example, a university that requires its students to use a proxy has a record of all the pages they have visited while on campus. Unfortunately, proxy data is quite biased and relatively small. Another source, internal to search engines, is the record of which results their users clicked on. Such data was used by the search engine "Direct Hit", and has recently been explored for dynamic ranking purposes [20]. An advantage of the toolbar data over this is that it contains information about URL visits that are not just the result of a search. The raw popularity is processed into a number of features, such as the number of times a page was viewed and the number of times any page in the domain was viewed. More details are provided in section 5.5.
Anchor text and inlinks: These features are based on the information associated with links to the page in question. They include features such as the total amount of text in links pointing to the page ("anchor text"), the number of unique words in that text, etc.
Page: This category consists of features which may be determined by looking at the page (and its URL) alone. We used only eight simple features, such as the number of words in the body, the frequency of the most common term, etc.
Domain: This category contains features that are computed as averages across all pages in the domain, for example, the average number of outlinks on any page and the average PageRank.
Many of these features have been used by others for ranking Web pages, particularly the anchor and page features. As mentioned, the evaluation is typically for dynamic ranking, and we wish to evaluate the use of them for static ranking. Also, to our knowledge, this is the first study on the use of actual page visitation popularity for static ranking. The closest similar work is on using click-through behavior (that is, which search engine results the users click on) to affect dynamic ranking (see e.g., [20]). Because we use a wide variety of features to come up with a static ranking, we refer to this as fRank (for feature-based ranking). fRank uses RankNet and the set of features described in this section to learn a ranking function for Web pages. Unless otherwise specified, fRank was trained with all of the features.
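As an illustration of how simple the Page-level signals are, the sketch below computes a few features of this kind. The paper names only examples (body word count, frequency of the most common term), so the exact list and names here are assumptions.

import re
from collections import Counter

# A handful of illustrative page-level features; the paper's exact eight
# features are not enumerated, so this particular list is an assumption.
def page_features(body_text, url):
    words = re.findall(r"[a-z0-9]+", body_text.lower())
    counts = Counter(words)
    return {
        "body_word_count": len(words),
        "unique_word_count": len(counts),
        "most_common_term_frequency": counts.most_common(1)[0][1] if counts else 0,
        "url_length": len(url),
        "url_depth": url.rstrip("/").count("/"),
    }

print(page_features("the quick brown fox jumps over the lazy fox", "http://example.com/a/b"))

Features like these can be computed once per page at index-build time, with no reference to the rest of the Web graph.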
5. EXPERIMENTS
In this section, we will demonstrate that we can outperform PageRank by applying machine learning to a straightforward set of features. Before the results, we first discuss the data, the performance metric, and the training method.
5.1 Data
In order to evaluate the quality of a static ranking, we needed a "gold standard" defining the correct ordering for a set of pages. For this, we employed a dataset which contains human judgments for 28,000 queries. For each query, a number of results are manually assigned a rating, from 0 to 4, by human judges. The rating is meant to be a measure of how relevant the result is for the query, where 0 means "poor" and 4 means "excellent". There are approximately 500k judgments in all, or an average of 18 ratings per query. The queries are selected by randomly choosing queries from among those issued to the MSN search engine. The probability that a query is selected is proportional to its frequency among all of the queries. As a result, common queries are more likely to be judged than uncommon queries. As an example of how diverse the queries are, the first four queries in the training set are "chef schools", "chicagoland speedway", "eagles fan club", and "Turkish culture". The documents selected for judging are those that we expected would, on average, be reasonably relevant (for example, the top ten documents returned by MSN's search engine). This provides significantly more information than randomly selecting documents on the Web, the vast majority of which would be irrelevant to a given query. Because of this process, the judged pages tend to be of higher quality than the average page on the Web, and tend to be pages that will be returned for common search queries. This bias is good when evaluating the quality of static ranking for the purposes of index ordering and returning relevant documents. This is because the most important portion of the index to be well-ordered and relevant is the portion that is frequently returned for search queries. Because of this bias, however, the results in this paper are not applicable to crawl prioritization. In order to obtain experimental results on crawl prioritization, we would need ratings on a random sample of Web pages. To convert the data from query-dependent to query-independent, we simply removed the query, taking the maximum over judgments for a URL that appears in more than one query. The reasoning behind this is that a page that is relevant for some query and irrelevant for another is probably a decent page and should have a high static rank. Because we evaluated the pages on queries that occur frequently, our data indicates the correct index ordering, and assigns high value to pages that are likely to be relevant to a common query. We randomly assigned queries to a training, validation, or test set, such that they contained 84%, 8%, and 8% of the queries, respectively. Each set contains all of the ratings for a given query, and no query appears in more than one set. The training set was used to train fRank. The validation set was used to select the model that had the highest performance. The test set was used for the final results. This data gives us a query-independent ordering of pages. The goal for a static ranking algorithm will be to reproduce this ordering as closely as possible. In the next section, we describe the measure we used to evaluate this.
5.2 Measure
We chose to use pairwise accuracy to evaluate the quality of a static ranking. The pairwise accuracy is the fraction of time that the ranking algorithm and human judges agree on the ordering of a pair of Web pages. If S(x) is the static ranking assigned to page x, and H(x) is the human judgment of relevance for x, then consider the following sets:

Hp = {<x, y> : H(x) > H(y)}  and  Sp = {<x, y> : S(x) > S(y)}.

The pairwise accuracy is the portion of Hp that is also contained in Sp:

pairwise accuracy = |Hp ∩ Sp| / |Hp|.

This measure was chosen for two reasons. First, the discrete human judgments provide only a partial ordering over Web pages, making it difficult to apply a measure such as the Spearman rank order correlation coefficient (in the pairwise accuracy measure, a pair of documents with the same human judgment does not affect the score). Second, the pairwise accuracy has an intuitive meaning: it is the fraction of pairs of documents that, when the humans claim one is better than the other, the static rank algorithm orders correctly.
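Both the query-independent label construction and the pairwise accuracy measure are short enough to state in code. The sketch below follows the definitions above directly; the function and variable names are our own.

from itertools import combinations

def query_independent_labels(judgments):
    """judgments: iterable of (query, url, rating) triples; the label of a URL
    is the maximum rating it received over all queries it was judged for."""
    labels = {}
    for _query, url, rating in judgments:
        labels[url] = max(rating, labels.get(url, 0))
    return labels

def pairwise_accuracy(static_rank, human_label):
    """|Hp intersect Sp| / |Hp|: the fraction of human-ordered pairs that the
    static ranking orders the same way."""
    agree = total = 0
    for x, y in combinations(human_label, 2):
        if human_label[x] == human_label[y]:
            continue                      # ties carry no ordering information
        total += 1
        hi, lo = (x, y) if human_label[x] > human_label[y] else (y, x)
        if static_rank[hi] > static_rank[lo]:
            agree += 1
    return agree / total if total else 0.0

labels = query_independent_labels([("q1", "a", 4), ("q2", "a", 1), ("q1", "b", 2)])
print(labels, pairwise_accuracy({"a": 0.9, "b": 0.3}, labels))  # {'a': 4, 'b': 2} 1.0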
5.3 Method
We trained fRank (a RankNet-based neural network) using the following parameters. We used a fully connected two-layer network. The hidden layer had 10 hidden nodes. The input weights to this layer were all initialized to be zero. The output "layer" (just a single node) weights were initialized using a uniform random distribution in the range [-0.1, 0.1]. We used tanh as the transfer function from the inputs to the hidden layer, and a linear function from the hidden layer to the output. The cost function is the pairwise cross-entropy cost function, as discussed in Section 3. The features in the training set were normalized to have zero mean and unit standard deviation. The same linear transformation was then applied to the features in the validation and test sets. For training, we presented the network with 5 million pairings of pages, where one page had a higher rating than the other. The pairings were chosen uniformly at random (with replacement) from all possible pairings. When forming the pairs, we ignored the magnitude of the difference between the ratings (the rating spread) for the two URLs. Hence, the weight for each pair was constant (one), and the probability of a pair being selected was independent of its rating spread. We trained the network for 30 epochs. On each epoch, the training pairs were randomly shuffled. The initial training rate was 0.001. At each epoch, we checked the error on the training set. If the error had increased, then we decreased the training rate, under the hypothesis that the network had probably overshot. The training rate at each epoch was thus set to

training rate = κ / (ε + 1),

where κ is the initial rate (0.001), and ε is the number of times the training set error has increased. After each epoch, we measured the performance of the neural network on the validation set, using 1 million pairs (chosen randomly with replacement). The network with the highest pairwise accuracy on the validation set was selected, and then tested on the test set. We report the pairwise accuracy on the test set, calculated using all possible pairs. These parameters were determined and fixed before the static rank experiments in this paper. In particular, the choice of initial training rate, number of epochs, and training rate decay function were taken directly from Burges et al. [7]. Though we had the option of preprocessing any of the features before they were input to the neural network, we refrained from doing so on most of them. The only exception was the popularity features. As with most Web phenomena, we found that the distribution of site popularity is Zipfian. To reduce the dynamic range, and hopefully make the feature more useful, we presented the network with both the unpreprocessed values and the logarithm of the popularity features (as with the others, the logarithmic feature values were also normalized to have zero mean and unit standard deviation). Applying fRank to a document is computationally efficient, taking time that is only linear in the number of input features; it is thus within a constant factor of other simple machine learning methods such as naïve Bayes. In our experiments, computing the fRank for all five billion Web pages was approximately 100 times faster than computing the PageRank for the same set.
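Two of these details, the feature normalization and the error-driven rate decay, are sketched below (the names are ours; the schedule is the κ/(ε + 1) rule just described).

import numpy as np

def normalize(train, *others):
    """Zero mean and unit standard deviation, computed on the training set
    only; the same linear transformation is applied to the other sets."""
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    sigma[sigma == 0] = 1.0                 # guard against constant features
    return [(m - mu) / sigma for m in (train, *others)]

def training_rate(kappa=0.001, epsilon=0):
    """Rate for an epoch: kappa / (epsilon + 1), where epsilon is the number
    of epochs on which the training-set error has increased so far."""
    return kappa / (epsilon + 1)

train, valid, test = (np.random.rand(n, 8) for n in (100, 20, 20))
train_n, valid_n, test_n = normalize(train, valid, test)
print(training_rate(0.001, 2))              # 0.001 / 3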
5.4 Results
As Table 1 shows, fRank significantly outperforms PageRank for the purposes of static ranking. With a pairwise accuracy of 67.4%, fRank more than doubles the accuracy of PageRank (relative to the baseline of 50%, which is the accuracy that would be achieved by a random ordering of Web pages). Note that one of fRank's input features is the PageRank of the page, so we would expect it to perform no worse than PageRank. The significant increase in accuracy implies that the other features (anchor, popularity, etc.) do in fact contain useful information regarding the overall quality of a page.
Table 1: Basic results.
There are a number of decisions that go into the computation of PageRank, such as how to deal with pages that have no outlinks, the choice of α, numeric precision, convergence threshold, etc. We were able to obtain a computation of PageRank from a completely independent implementation (provided by Marc Najork) that varied somewhat in these parameters. It achieved a pairwise accuracy of 56.52%, nearly identical to that obtained by our implementation. We thus concluded that the quality of the PageRank is not sensitive to these minor variations in algorithm, nor was PageRank's low accuracy due to problems with our implementation of it. We also wanted to find how well each feature set performed. To answer this, for each feature set, we trained and tested fRank using only that set of features. The results are shown in Table 2. As can be seen, every single feature set individually outperformed PageRank on this test. Perhaps the most interesting result is that the Page-level features had the highest performance out of all the feature sets. This is surprising because these are features that do not depend on the overall graph structure of the Web, nor even on what pages point to a given page. This is contrary to the common belief that the Web graph structure is the key to finding a good static ranking of Web pages.
Table 2: Results for individual feature sets.
Because we are using a two-layer neural network, the features in the learned network can interact with each other in interesting, nonlinear ways. This means that a particular feature that appears to have little value in isolation could actually be very important when used in combination with other features. To measure the final contribution of a feature set, in the context of all the other features, we performed an ablation study. That is, for each set of features, we trained a network to contain all of the features except that set. We then compared the performance of the resulting network to the performance of the network with all of the features. Table 3 shows the results of this experiment, where the "decrease in accuracy" is the difference in pairwise accuracy between the network trained with all of the features and the network missing the given feature set.
Table 3: Ablation study. Shown is the decrease in accuracy when we train a network that has all but the given set of features. The last line shows the effect of removing the anchor, PageRank, and domain features, hence a model containing no network or link-based information whatsoever.
The results of the ablation study are consistent with the individual feature set study. Both show that the most important feature set is the Page-level feature set, and the second most important is the popularity feature set. Finally, we wished to see how the performance of fRank improved as we added features; we wanted to find at what point adding more feature sets became relatively useless. Beginning with no features, we greedily added the feature set that improved performance the most. The results are shown in Table 4. For example, the fourth line of the table shows that fRank using the page, popularity, and anchor features outperformed any network that used the page, popularity, and some other feature set, and that the performance of this network was 67.25%.
Table 4: fRank performance as feature sets are added. At each row, the feature set that gave the greatest increase in accuracy was added to the list of features (i.e., we conducted a greedy search over feature sets).
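The procedure behind Table 4 is an ordinary greedy forward selection over feature sets. A sketch follows; the evaluate function is a stand-in for training fRank on the given sets and measuring pairwise accuracy on the validation set, and the toy scores are assumptions so the example runs.

# Greedy forward selection over feature sets, as used to build Table 4.
def greedy_feature_search(feature_sets, evaluate):
    chosen, history = [], []
    remaining = list(feature_sets)
    while remaining:
        best = max(remaining, key=lambda s: evaluate(chosen + [s]))
        remaining.remove(best)
        chosen.append(best)
        history.append((tuple(chosen), evaluate(chosen)))
    return history

# Toy evaluation: accuracy rises with each added set (weights are made up).
weights = {"Page": 0.06, "Popularity": 0.05, "Anchor": 0.03,
           "PageRank": 0.02, "Domain": 0.01}
evaluate = lambda sets: 0.5 + sum(weights[s] for s in sets)
for sets, accuracy in greedy_feature_search(weights, evaluate):
    print(sets, round(accuracy, 3))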
Finally, we present a qualitative comparison of PageRank vs. fRank. In Table 5 are the top ten URLs returned for PageRank and for fRank.
Table 5: Top ten URLs for PageRank vs. fRank.
PageRank's results are heavily weighted towards technology sites. It contains two QuickTime URLs (Apple's video playback software), as well as Internet Explorer and FireFox URLs (both of which are Web browsers). fRank, on the other hand, contains more consumer-oriented sites such as American Express, Target, Dell, etc. PageRank's bias toward technology can be explained through two processes. First, there are many pages with "buttons" at the bottom suggesting that the site is optimized for Internet Explorer, or that the visitor needs QuickTime. These generally link back to, in these examples, the Internet Explorer and QuickTime download sites. Consequently, PageRank ranks those pages highly. Though these pages are important, they are not as important as it may seem by looking at the link structure alone. One fix for this is to add information about the link to the PageRank computation, such as the size of the text, whether it was at the bottom of the page, etc. The other bias comes from the fact that the population of Web site authors is different than the population of Web users. Web authors tend to be technologically oriented, and thus their linking behavior reflects those interests. fRank, by knowing the actual visitation popularity of a site (the popularity feature set), is able to eliminate some of that bias. It has the ability to depend more on where actual Web users visit rather than where the Web site authors have linked. The results confirm that fRank outperforms PageRank in pairwise accuracy. The two most important feature sets are the page and popularity features. This is surprising, as the page features consisted of only a few (8) simple features. Further experiments found that, of the page features, those based on the text of the page (as opposed to the URL) performed the best. In the next section, we explore the popularity feature in more detail.
5.5 Popularity Data
As mentioned in section 4, our popularity data came from MSN toolbar users. For privacy reasons, we had access only to an aggregate count of, for each URL, how many times it was visited by any toolbar user. This limited the possible features we could derive from this data. For possible extensions, see section 6.3, future work. For each URL in our train and test sets, we provided a feature to fRank which was how many times it had been visited by a toolbar user. However, this feature was quite noisy and sparse, particularly for URLs with query parameters (e.g., http://search.msn.com/results.aspx?q=machine+learning&form=QBHP). One solution was to provide an additional feature which was the number of times any URL at the given domain was visited by a toolbar user. Adding this feature dramatically improved the performance of fRank. We took this one step further and used the built-in hierarchical structure of URLs to construct many levels of backoff between the full URL and the domain.
We did this by using the set of functions shown in Table 6.
Table 6: URL functions used to compute the Popularity feature set.
Each URL was assigned one feature for each function shown in the table. The value of the feature was the count of the number of times a toolbar user visited a URL, where the function applied to that URL matches the function applied to the URL in question. For example, a user's visit to cnn.com/2005/sports.html would increment the Domain and Domain+1 features for the URL cnn.com/2005/tech/wikipedia.html.
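A sketch of this backoff computation follows (the exact set of functions is given in Table 6; the level names and URL parsing choices below are our own assumptions):

from urllib.parse import urlsplit

# Backoff keys for a URL: the domain, then the domain plus successively
# more path segments, up to the full path (level names are assumptions).
def backoff_keys(url):
    parts = urlsplit(url if "//" in url else "//" + url)
    segments = [s for s in parts.path.split("/") if s]
    keys = [("domain", parts.netloc)]
    for depth in range(1, len(segments) + 1):
        keys.append((f"domain+{depth}",
                     parts.netloc + "/" + "/".join(segments[:depth])))
    return keys

# visit_counts: {visited_url: number of toolbar visits}. A visit contributes
# to every backoff level it matches, so a visit to cnn.com/2005/sports.html
# increments Domain and Domain+1 for cnn.com/2005/tech/wikipedia.html.
def popularity_features(url, visit_counts):
    totals = {}
    for visited, n in visit_counts.items():
        for _level, key in backoff_keys(visited):
            totals[key] = totals.get(key, 0) + n
    return {level: totals.get(key, 0) for level, key in backoff_keys(url)}

visits = {"cnn.com/2005/sports.html": 3, "cnn.com/2005/tech/wikipedia.html": 1}
print(popularity_features("cnn.com/2005/tech/wikipedia.html", visits))
# {'domain': 4, 'domain+1': 4, 'domain+2': 1, 'domain+3': 1}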
As seen in Table 7, adding the domain counts significantly improved the quality of the popularity feature, and adding the numerous backoff functions listed in Table 6 improved the accuracy even further.
Table 7: Effect of adding backoff to the popularity feature set.
Backing off to subsets of the URL is one technique for dealing with the sparsity of data. It is also informative to see how the performance of fRank depends on the amount of popularity data that we have collected. In Figure 1 we show the performance of fRank trained with only the popularity feature set vs. the amount of data we have for the popularity feature set. Each day, we receive additional popularity data, and as can be seen in the plot, this increases the performance of fRank. The relation is logarithmic: doubling the amount of popularity data provides a constant improvement in pairwise accuracy.
Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set. Note the x-axis is a logarithmic scale.
In summary, we have found that the popularity features provide a useful boost to the overall fRank accuracy. Gathering more popularity data, as well as employing simple backoff strategies, improves this boost even further.
5.6 Summary of Results
The experiments provide a number of conclusions. First, fRank performs significantly better than PageRank, even without any information about the Web graph. Second, the page-level and popularity features were the most significant contributors to pairwise accuracy. Third, by collecting more popularity data, we can continue to improve fRank's performance. The popularity data provides two benefits to fRank. First, we see that qualitatively, fRank's ordering of Web pages has a more favorable bias than PageRank's. fRank's ordering seems to correspond to what Web users, rather than Web page authors, prefer. Second, the popularity data is more timely than PageRank's link information. The toolbar provides information about which Web pages people find interesting right now, whereas links are added to pages more slowly, as authors find the time and interest.
6. RELATED AND FUTURE WORK
6.1 Improvements to PageRank
Since the original PageRank paper, there has been work on improving it. Much of that work centers on speeding up and parallelizing the computation [15][25]. One recognized problem with PageRank is that of topic drift: a page about "dogs" will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic. In contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs. Hence, a link that is "on topic" should have higher weight than a link that is not. Richardson and Domingos's Query-Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem. Other variations to PageRank include weighting inter- and intra-domain links differently, adding a backwards step to the random surfer to simulate the "back" button on most browsers [24], and modifying the jump probability α [3]. See Langville and Meyer [23] for a good survey of these and other modifications to PageRank.
6.2 Other related work
PageRank is not the only link analysis algorithm used for ranking Web pages. The most well-known other is HITS [22], which is used by the Teoma search engine [30]. HITS produces a list of hubs and authorities, where hubs are pages that point to many authority pages, and authorities are pages that are pointed to by many hubs. Previous work has shown HITS to perform comparably to PageRank [1]. One field of interest is that of static index pruning (see e.g., Carmel et al. [8]). Static index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query. The pruning is typically done based on the frequency of query terms. Similarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search. Similar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page). Others have investigated the effect that PageRank has on the Web at large [9]. They argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages. The same may occur for the popularity data. If we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity. Cho et al. [10] argue that a more appropriate measure of Web page quality would depend not only on the current link structure of the Web, but also on the change in that link structure. The same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity. One interesting related work is that of Ivory and Hearst [19]. Their goal was to build a model of Web sites that are considered high quality from the perspective of "content, structure and navigation, visual design, functionality, interactivity, and overall experience". They used over 100 page-level features, as well as features encompassing the performance and structure of the site. This let them qualitatively describe the qualities of a page that make it appear attractive (e.g., rare use of italics, at least 9 point font, ...), and (in later work) to build a system that assists novice Web page authors in creating quality pages by evaluating them according to these features. The primary differences between this work and ours are the goal (discovering what constitutes a good Web page vs. ordering Web pages for the purposes of Web search), the size of the study (they used a dataset of less than 6000 pages vs. our set of 468,000), and our comparison with PageRank. Nevertheless, their work provides insights into additional useful static features that we could incorporate into fRank in the future. Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al.
[11] present a method for determining the best transformation to apply to query-independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al. [2], applies machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking). They do not apply their techniques to the problem of static ranking.
6.3 Future work
There are many ways in which we would like to extend this work. First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features. In particular, the existence, or lack thereof, of certain words could prove very significant (for instance, "under construction" probably signifies a low-quality page). Other features could include the number of images on a page, the size of those images, the number of layout elements (tables, divs, and spans), the use of style sheets, conformance to W3C standards (like XHTML 1.0 Strict), the background color of a page, etc. Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features. The resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc. fRank allows one to specify a confidence in each pairing of documents. In the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair. For example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2. The experiments in this paper are biased toward pages that have higher-than-average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy. Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself. Work on biasing the PageRank jump vector [16] and transition matrix [29] has demonstrated the feasibility and advantages of such an approach. There is reason to believe that a direct application of [29], using the fRank of a page for its "relevance", could lead to an improved overall static rank. Finally, the popularity data can be used in other interesting ways. The general surfing and searching habits of Web users vary by time of day. Activity in the morning, daytime, and evening is often quite different (e.g., reading the news, solving problems, and accessing entertainment, respectively). We can gain insight into these differences by using the popularity data, divided into segments of the day. When a query is issued, we would then use the popularity data matching the time of query in order to do the ranking of Web pages.
We also plan to explore popularity features that use more than just the counts of how often a page was visited: for example, how long users tended to dwell on a page, and whether they left the page by clicking a link or by hitting the back button. Fox et al. did a study that showed that features such as this can be valuable for the purposes of dynamic ranking [14]. Finally, the popularity data could be used as the label rather than as a feature. Using fRank in this way to predict the popularity of a page may be useful for the tasks of relevance, efficiency, and crawl priority. There is also significantly more popularity data than human-labeled data, potentially enabling more complex machine learning methods and significantly more features.
7. CONCLUSIONS
A good static ranking is an important component for today's search engines and information retrieval systems. We have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually outperform PageRank. By combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone. A qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit. The machine learning component of fRank gives it the additional benefit of being more robust against spammers, and allows it to leverage further developments in the machine learning community in areas such as adversarial classification.
Beyond PageRank: Machine Learning for Static Ranking ABSTRACT Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random). 1. INTRODUCTION Over the past decade, the Web has grown exponentially in size. Unfortunately, this growth has not been isolated to good-quality pages. The number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly. The sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information. Users rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first. To date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (querydependent ranking, or dynamic ranking). However, having a good query-independent ranking (static ranking) is also crucially important for a search engine. A good static ranking algorithm provides numerous benefits: • Relevance: The static rank of a page provides a general indicator to the overall quality of the page. This is a useful input to the dynamic ranking algorithm. • Efficiency: Typically, the search engine's index is ordered by static rank. By traversing the index from highquality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high of a dynamic rank as those already found. The more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to queries. • Crawl Priority: The Web grows and changes as quickly as search engines can crawl it. Search engines need a way to prioritize their crawl--to determine which pages to recrawl, how frequently, and how often to seek out new pages. Among other factors, the static rank of a page is used to determine this prioritization. A better static rank thus provides the engine with a higher quality, more upto-date index. Google is often regarded as the first commercially successful search engine. Their ranking was originally based on the PageRank algorithm [5] [27]. Due to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages. Though PageRank has historically been thought to perform quite well, there has yet been little academic evidence to support this claim. Even worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks. Upstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32]. They found similar results for the task of finding high quality companies [31]. 
PageRank has also been used in systems for TREC's "very large collection" and "Web track" competitions, but with much less success than had been expected [17]. Finally, Amento et al. [1] found that simple features, such as the number of pages on a site, performed as well as PageRank. Despite these, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank. Failing this, it is still assumed that using the link structure is crucial, in the form of the number of inlinks or the amount of anchor text. In this paper, we show there are a number of simple url - or pagebased features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the structure of the Web. We combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels). A machine learning approach for static ranking has other advantages besides the quality of the ranking. Because the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming). This is particularly true if the feature set is not known. In contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page. With an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank. This flexibility allows a ranking system to rapidly react to new spamming techniques. A machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field. For example, recent work on adversarial classification [12] suggests that it may be possible to explicitly model the Web page spammer's (the adversary) actions, adjusting the ranking model in advance of the spammer's attempts to circumvent it. Another example is the elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank. By moving static ranking to a machine learning framework, we not only gain in accuracy, but also gain in the ability to react to spammer's actions, to rapidly add new features to the ranking algorithm, and to leverage advances in the rapidly growing field of machine learning. Finally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet. These are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful. For example, the author of an intranet page and his/her position in the organization (e.g., CEO, manager, or developer) could provide significant clues as to the importance of that page. A machine learning approach thus allows rapid development of a good static algorithm in new domains. This paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages. Previous studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). 
Also, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering. In contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages. We first briefly describe the PageRank algorithm. In Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking. Section 4 describes the static features. The heart of the paper is in Section 5, which presents our experiments and results. We conclude with a discussion of related and future work. 2. PAGERANK 3. RANKNET 4. FEATURES 5. EXPERIMENTS 5.1 Data 5.2 Measure 5.3 Method 5.4 Results 5.5 Popularity Data 5.6 Summary of Results 6. RELATED AND FUTURE WORK 6.1 Improvements to PageRank Since the original PageRank paper, there has been work on improving it. Much of that work centers on speeding up and parallelizing the computation [15] [25]. One recognized problem with PageRank is that of topic drift: A page about "dogs" will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic. In contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs. Hence, a link that is "on topic" should have higher weight than a link that is not. Richardson and Domingos's Query Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem. Other variations to PageRank include differently weighting links for inter - vs. intra-domain links, adding a backwards step to the random surfer to simulate the "back" button on most browsers [24] and modifying the jump probability (α) [3]. See Langville and Meyer [23] for a good survey of these, and other modifications to PageRank. 6.2 Other related work PageRank is not the only link analysis algorithm used for ranking Web pages. The most well-known other is HITS [22], which is used by the Teoma search engine [30]. HITS produces a list of hubs and authorities, where hubs are pages that point to many Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set. Note the x-axis is a logarithmic scale. authority pages, and authorities are pages that are pointed to by many hubs. Previous work has shown HITS to perform comparably to PageRank [1]. One field of interest is that of static index pruning (see e.g., Carmel et al. [8]). Static index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query. The pruning is typically done based on the frequency of query terms. Similarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search. Similar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page). Others have investigated the effect that PageRank has on the Web at large [9]. They argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages. The same may occur for the popularity data. If we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity. Cho et al. 
[10] argue that a more appropriate measure of Web page quality would depend on not only the current link structure of the Web, but also on the change in that link structure. The same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity. One interesting related work is that of Ivory and Hearst [19]. Their goal was to build a model of Web sites that are considered high quality from the perspective of "content, structure and navigation, visual design, functionality, interactivity, and overall experience". They used over 100 page level features, as well as features encompassing the performance and structure of the site. This let them qualitatively describe the qualities of a page that make it appear attractive (e.g., rare use of italics, at least 9 point font, ...), and (in later work) to build a system that assists novel Web page authors in creating quality pages by evaluating it according to these features. The primary differences between this work and ours are the goal (discovering what constitutes a good Web page vs. ordering Web pages for the purposes of Web search), the size of the study (they used a dataset of less than 6000 pages vs. our set of 468,000), and our comparison with PageRank. Nevertheless, their work provides insights to additional useful static features that we could incorporate into fRank in the future. Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al. [11] present a method for determining the best transformation to apply to query independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al. [2] apply machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking). They do not apply their techniques to the problem of static ranking. 6.3 Future work There are many ways in which we would like to extend this work. First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features. In particular the existence, or lack thereof, of certain words could prove very significant (for instance, "under construction" probably signifies a low quality page). Other features could include the number of images on a page, size of those images, number of layout elements (tables, divs, and spans), use of style sheets, conforming to W3C standards (like XHTML 1.0 Strict), background color of a page, etc. . Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features. The resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc. fRank allows one to specify a confidence in each pairing of documents. In the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair. 
For example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2. The experiments in this paper are biased toward pages that have higher than average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy.

Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself. Work on biasing the PageRank jump vector [16] and transition matrix [29] has demonstrated the feasibility and advantages of such an approach. There is reason to believe that a direct application of [29], using the fRank of a page for its "relevance", could lead to an improved overall static rank.

Finally, the popularity data can be used in other interesting ways. The general surfing and searching habits of Web users vary by time of day. Activity in the morning, daytime, and evening is often quite different (e.g., reading the news, solving problems, and accessing entertainment, respectively). We can gain insight into these differences by using the popularity data, divided into segments of the day. When a query is issued, we would then use the popularity data matching the time of query in order to do the ranking of Web pages. We also plan to explore popularity features that use more than just the counts of how often a page was visited: for example, how long users tended to dwell on a page, whether they left the page by clicking a link or by hitting the back button, etc. Fox et al. did a study showing that features such as these can be valuable for the purposes of dynamic ranking [14]. Finally, the popularity data could be used as the label rather than as a feature. Using fRank in this way to predict the popularity of a page may be useful for the tasks of relevance, efficiency, and crawl priority. There is also significantly more popularity data than human-labeled data, potentially enabling more complex machine learning methods, and significantly more features.

7. CONCLUSIONS
A good static ranking is an important component for today's search engines and information retrieval systems. We have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually outperform PageRank. By combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone. A qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit. The machine learning component of fRank gives it the additional benefit of being more robust against spammers, and allows it to leverage further developments in the machine learning community in areas such as adversarial classification.
We have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data.
Beyond PageRank: Machine Learning for Static Ranking

ABSTRACT
Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random).

1. INTRODUCTION
Unfortunately, the growth of the Web has not been isolated to good-quality pages. The number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly. The sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information. Users rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first.

To date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (query-dependent ranking, or dynamic ranking). However, having a good query-independent ranking (static ranking) is also crucially important for a search engine. A good static ranking algorithm provides numerous benefits:
• Relevance: The static rank of a page provides a general indicator of the overall quality of the page. This is a useful input to the dynamic ranking algorithm.
• Efficiency: Typically, the search engine's index is ordered by static rank. By traversing the index from high-quality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high a dynamic rank as those already found. The more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to queries.
• Crawl Priority: The Web grows and changes as quickly as search engines can crawl it. Search engines need a way to prioritize their crawl: to determine which pages to recrawl, how frequently, and how often to seek out new pages. Among other factors, the static rank of a page is used to determine this prioritization. A better static rank thus provides the engine with a higher quality, more up-to-date index.

Google is often regarded as the first commercially successful search engine. Their ranking was originally based on the PageRank algorithm [5] [27]. Due to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages. Though PageRank has historically been thought to perform quite well, there has yet been little academic evidence to support this claim. Even worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks. Upstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32]. They found similar results for the task of finding high quality companies [31].
PageRank has also been used in systems for TREC's "very large collection" and "Web track" competitions, but with much less success than had been expected [17]. Finally, Amento et al. [1] found that simple features, such as the number of pages on a site, performed as well as PageRank. Despite these, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank.

In this paper, we show there are a number of simple URL- or page-based features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the structure of the Web. We combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels).

A machine learning approach for static ranking has other advantages besides the quality of the ranking. Because the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming). This is particularly true if the feature set is not known. In contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page. With an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank. This flexibility allows a ranking system to rapidly react to new spamming techniques.

A machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field. One example is the elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank. Finally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet. These are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful. A machine learning approach thus allows rapid development of a good static algorithm in new domains.

This paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages. Previous studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). Also, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering. In contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages.

We first briefly describe the PageRank algorithm. In Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking. Section 4 describes the static features. We conclude with a discussion of related and future work.

6. RELATED AND FUTURE WORK
6.1 Improvements to PageRank
Since the original PageRank paper, there has been work on improving it. Much of that work centers on speeding up and parallelizing the computation [15] [25].
One recognized problem with PageRank is that of topic drift: A page about "dogs" will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic. In contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs. Richardson and Domingos's Query Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem. See Langville and Meyer [23] for a good survey of these and other modifications to PageRank.

6.2 Other related work
PageRank is not the only link analysis algorithm used for ranking Web pages. The most well-known other is HITS [22], which is used by the Teoma search engine [30]. HITS produces a list of hubs and authorities, where hubs are pages that point to many authority pages, and authorities are pages that are pointed to by many hubs. Previous work has shown HITS to perform comparably to PageRank [1].

[Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set.]

One field of interest is that of static index pruning (see e.g., Carmel et al. [8]). Static index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query. The pruning is typically done based on the frequency of query terms. Similarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search. Similar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page). Others have investigated the effect that PageRank has on the Web at large [9]. They argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages. The same may occur for the popularity data. If we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity. Cho et al. [10] argue that a more appropriate measure of Web page quality would depend not only on the current link structure of the Web, but also on the change in that link structure. The same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity.

One interesting related work is that of Ivory and Hearst [19]. They used over 100 page-level features, as well as features encompassing the performance and structure of the site. Nevertheless, their work provides insights into additional useful static features that we could incorporate into fRank in the future.

Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al. [11] present a method for determining the best transformation to apply to query-independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al. [2], applies machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking). They do not apply their techniques to the problem of static ranking.

6.3 Future work
There are many ways in which we would like to extend this work.
First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features.

Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features.

The experiments in this paper are biased toward pages that have higher than average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy.

Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself. Work on biasing the PageRank jump vector [16] and transition matrix [29] has demonstrated the feasibility and advantages of such an approach. There is reason to believe that a direct application of [29], using the fRank of a page for its "relevance", could lead to an improved overall static rank.

Finally, the popularity data can be used in other interesting ways. The general surfing and searching habits of Web users vary by time of day. We can gain insight into these differences by using the popularity data, divided into segments of the day. When a query is issued, we would then use the popularity data matching the time of query in order to do the ranking of Web pages. We also plan to explore popularity features that use more than just the counts of how often a page was visited: for example, how long users tended to dwell on a page, whether they left the page by clicking a link or by hitting the back button, etc. Fox et al. did a study showing that features such as these can be valuable for the purposes of dynamic ranking [14]. Finally, the popularity data could be used as the label rather than as a feature. Using fRank in this way to predict the popularity of a page may be useful for the tasks of relevance, efficiency, and crawl priority. There is also significantly more popularity data than human-labeled data, potentially enabling more complex machine learning methods, and significantly more features.

7. CONCLUSIONS
A good static ranking is an important component for today's search engines and information retrieval systems. We have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually outperform PageRank. By combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone. A qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit. We have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data.
H-45
Query Performance Prediction in Web Search Environments
Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogenous test collections of small sizes, face serious challenges in web search environments where collections are significantly more heterogeneous and different types of retrieval tasks exist. In this paper, we present three techniques to address these challenges. We focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding. Our evaluation is mainly performed on the GOV2 collection. In addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation that the two types of queries are mixed together without prior information on query types. To assist prediction under the mixed-query situation, a novel query classifier is adopted. Results show that our prediction of web query performance is substantially more accurate than the current state-of-the-art prediction techniques. Consequently, our paper provides a practical approach to performance prediction in real-world web settings.
[ "queri perform predict", "web search environ", "web search", "homogen test collect", "gov2 collect", "content-base queri", "content-base and name-page find", "mix-queri situat", "queri classif", "trec document collect", "rank robust techniqu", "name-page find task", "weight inform gain", "wig", "robust score probabilitydens classifi", "kl-diverg", "jensen-shannon diverg" ]
[ "P", "P", "P", "P", "P", "M", "M", "M", "M", "M", "M", "M", "M", "U", "M", "U", "U" ]
Query Performance Prediction in Web Search Environments
Yun Zhou and W. Bruce Croft
Department of Computer Science, University of Massachusetts, Amherst
{yzhou, croft}@cs.umass.edu

ABSTRACT
Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogenous test collections of small sizes, face serious challenges in web search environments where collections are significantly more heterogeneous and different types of retrieval tasks exist. In this paper, we present three techniques to address these challenges. We focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding. Our evaluation is mainly performed on the GOV2 collection. In addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation that the two types of queries are mixed together without prior information on query types. To assist prediction under the mixed-query situation, a novel query classifier is adopted. Results show that our prediction of web query performance is substantially more accurate than the current state-of-the-art prediction techniques. Consequently, our paper provides a practical approach to performance prediction in real-world web settings.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Query formulation

General Terms
Algorithms, Experimentation, Theory

1. INTRODUCTION
Query performance prediction has many applications in a variety of information retrieval (IR) areas such as improving retrieval consistency, query refinement, and distributed IR. The importance of this problem has been recognized by IR researchers and a number of new methods have been proposed for prediction recently [1, 2, 17]. Most work on prediction has focused on the traditional ad-hoc retrieval task where query performance is measured according to topical relevance. These prediction models are evaluated on TREC document collections which typically consist of no more than one million relatively homogenous newswire articles. With the popularity and influence of the Web, prediction techniques that will work well for web-style queries are highly preferable. However, web search environments pose significant challenges to current prediction models that are mainly designed for traditional TREC settings. Here we outline some of these challenges.

First, web collections, which are much larger than conventional TREC collections, include a variety of documents that are different in many aspects such as quality and style. Current prediction techniques can be vulnerable to these characteristics of web collections. For example, the reported prediction accuracy of the ranking robustness technique and the clarity technique on the GOV2 collection (a large web collection) is significantly worse compared to the other TREC collections [1]. Similar prediction accuracy on the GOV2 collection using another technique is reported in [2], confirming the difficulty of predicting query performance on a large web collection.

Second, web search goes beyond the scope of the ad-hoc retrieval task based on topical relevance. For example, the Named-Page (NP) finding task, which is a navigational task, is also popular in web retrieval. Query performance prediction for the NP task is still necessary since NP retrieval performance is far from perfect.
In fact, according to the report on the NP task of the 2005 Terabyte Track [3], about 40% of the test queries perform poorly (no correct answer in the first 10 search results) even in the best run from the top group. To our knowledge, little research has explicitly addressed the problem of NP-query performance prediction. Current prediction models devised for content-based queries will be less effective for NP queries considering the fundamental differences between the two.

Third, in real-world web search environments, user queries are usually a mixture of different types and prior knowledge about the type of each query is generally unavailable. The mixed-query situation raises new problems for query performance prediction. For instance, we may need to incorporate a query classifier into prediction models. Despite these problems, the ability to handle this situation is a crucial step towards turning query performance prediction from an interesting research topic into a practical tool for web retrieval.

In this paper, we present three techniques to address the above challenges that current prediction models face in web search environments. Our work focuses on query performance prediction for the content-based (ad-hoc) retrieval task and the named-page finding task in the context of web retrieval. Our first technique, called weighted information gain (WIG), makes use of both single term and term proximity features to estimate the quality of top retrieved documents for prediction. We find that WIG offers consistent prediction accuracy across various test collections and query types. Moreover, we demonstrate that good prediction accuracy can be achieved for the mixed-query situation by using WIG with the help of a query type classifier. Query feedback and first rank change, which are our second and third prediction techniques, perform well for content-based queries and NP queries respectively.

Our main contributions include: (1) considerably improved prediction accuracy for web content-based queries over several state-of-the-art techniques; (2) new techniques for successfully predicting NP-query performance; (3) a practical and fully automatic solution to predicting mixed-query performance. In addition, one minor contribution is that we find that the robustness score [1], which was originally proposed for performance prediction, is helpful for query classification.

2. RELATED WORK
As we mentioned in the introduction, a number of prediction techniques have been proposed recently that focus on content-based queries in the topical relevance (ad-hoc) task. We know of no published work that addresses other types of queries such as NP queries, let alone a mixture of query types. Next we review some representative models.

The major difficulty of performance prediction comes from the fact that many factors have an impact on retrieval performance. Each factor affects performance to a different degree and the overall effect is hard to predict accurately. Therefore, it is not surprising to notice that simple features, such as the frequency of query terms in the collection [4] and the average IDF of query terms [5], do not predict well. In fact, most of the successful techniques are based on measuring some characteristics of the retrieved document set to estimate topic difficulty. For example, the clarity score [6] measures the coherence of a list of documents by the KL-divergence between the query model and the collection model.
The robustness score [1] quantifies another property of a ranked list: the robustness of the ranking in the presence of uncertainty. Carmel et al. [2] found that the distance measured by the Jensen-Shannon divergence between the retrieved document set and the collection is significantly correlated to average precision. Vinay et al. [7] proposed four measures to capture the geometry of the top retrieved documents for prediction. The most effective measure is the sensitivity to document perturbation, an idea somewhat similar to the robustness score. Unfortunately, their way of measuring the sensitivity does not perform equally well for short queries, and prediction accuracy drops considerably when a state-of-the-art retrieval technique (like Okapi or a language modeling approach) is adopted for retrieval instead of the tf-idf weighting used in their paper [16]. The difficulties of applying these models in web search environments have already been mentioned. In this paper, we mainly adopt the clarity score and the robustness score as our baselines. We experimentally show that the baselines, even after being carefully tuned, are inadequate for the web environment.

One of our prediction models, WIG, is related to the Markov random field (MRF) model for information retrieval [8]. The MRF model directly models term dependence and is found to be highly effective across a variety of test collections (particularly web collections) and retrieval tasks. This model is used to estimate the joint probability distribution over documents and queries, an important part of WIG. The superiority of WIG over other prediction techniques based on unigram features, which will be demonstrated later in our paper, coincides with that of MRF for retrieval. In other words, it is interesting to note that term dependence, when being modeled appropriately, can be helpful for both improving and predicting retrieval performance.

3. PREDICTION MODELS
3.1 Weighted Information Gain (WIG)
This section introduces a weighted information gain approach that incorporates both single term and proximity features for predicting performance for both content-based and Named-Page (NP) finding queries. Given a set of queries Q = {Q_s} (s = 1, 2, ..., N) which includes all possible user queries, and a set of documents D = {D_t} (t = 1, 2, ..., M), we assume that each query-document pair (Q_s, D_t) is manually judged and will be put in a relevance list if Q_s is found to be relevant to D_t. The joint probability P(Q_s, D_t) over queries Q and documents D denotes the probability that pair (Q_s, D_t) will be in the relevance list. Such assumptions are similar to those used in [8]. Assuming that the user issues query Q_i ∈ Q and the retrieval result in response to Q_i is a ranked list L of documents, we calculate the amount of information contained in P(Q_s, D_t) with respect to Q_i and L by Eq. 1, which is a variant of entropy called the weighted entropy [13]. The weights in Eq. 1 are solely determined by Q_i and L:

H_{Q_i,L}(Q_s, D_t) = -\sum_{s,t} \mathrm{weight}(Q_s, D_t) \log P(Q_s, D_t)    (1)

In this paper, we choose the weights as follows:

\mathrm{weight}(Q_s, D_t) = \begin{cases} 1/K, & \text{if } s = i \text{ and } D_t \in T_K(L) \\ 0, & \text{otherwise} \end{cases}    (2)

where T_K(L) contains the top K documents in L. The cutoff rank K is a parameter in our model that will be discussed later.
Accordingly, Eq. 1 can be simplified as follows:

H_{Q_i,L}(Q_s, D_t) = -\frac{1}{K} \sum_{D_t \in T_K(L)} \log P(Q_i, D_t)    (3)

Unfortunately, the weighted entropy H_{Q_i,L}(Q_s, D_t) computed by Eq. 3, which represents the amount of information about how likely the top ranked documents in L would be relevant to query Q_i on average, cannot be compared across different queries, making it inappropriate for directly predicting query performance. To mitigate this problem, we come up with a background distribution P(Q_s, C) over Q and D by imagining that every document in D is replaced by the same special document C which represents average language usage. In this paper, C is created by concatenating every document in D. Roughly speaking, C is the collection (the document set) {D_t} without document boundaries. Similarly, the weighted entropy H_{Q_i,L}(Q_s, C) calculated by Eq. 3 represents the amount of information about how likely an average document (represented by the whole collection) would be relevant to query Q_i.

Now we introduce our performance predictor WIG, which is the weighted information gain [13] computed as the difference between H_{Q_i,L}(Q_s, C) and H_{Q_i,L}(Q_s, D_t). Specifically, given query Q_i, collection C and ranked list L of documents, WIG is calculated as follows:

WIG(Q_i, C, L) = H_{Q_i,L}(Q_s, C) - H_{Q_i,L}(Q_s, D_t) = \sum_{s,t} \mathrm{weight}(Q_s, D_t) \log \frac{P(Q_s, D_t)}{P(Q_s, C)} = \frac{1}{K} \sum_{D_t \in T_K(L)} \log \frac{P(Q_i, D_t)}{P(Q_i, C)}    (4)

WIG computed by Eq. 4 measures the change in information about the quality of retrieval (in response to query Q_i) from an imaginary state where only an average document is retrieved to a posterior state where the actual search results are observed. We hypothesize that WIG is positively correlated with retrieval effectiveness because high quality retrieval should be much more effective than just returning the average document.

The heart of this technique is how to estimate the joint distribution P(Q_s, D_t). In the language modeling approach to IR, a variety of models can be applied readily to estimate this distribution. Although most of these models are based on the bag-of-words assumption, recent work on modeling term dependence under the language modeling framework has shown consistent and significant improvements in retrieval effectiveness over bag-of-words models. Inspired by the success of incorporating term proximity features into language models, we decided to adopt a good dependence model to estimate the probability P(Q_s, D_t). The model we chose for this paper is Metzler and Croft's Markov Random Field (MRF) model, which has already demonstrated superiority on a number of collections and different retrieval tasks [8, 9]. According to the MRF model, log P(Q_i, D_t) can be written as

\log P(Q_i, D_t) = -\log Z_1 + \sum_{\xi \in F(Q_i)} \lambda_\xi \log P(\xi \mid D_t)    (5)

where Z_1 is a constant that ensures that P(Q_i, D_t) sums to 1. F(Q_i) consists of a set of features expanded from the original query Q_i. For example, assuming that query Q_i is "talented student program", F(Q_i) includes features like "program" and "talented student". We consider two kinds of features: single term features T and proximity features P. Proximity features include exact phrase (#1) and unordered window (#uwN) features as described in [8]. Note that F(Q_i) is the union of T(Q_i) and P(Q_i). For more details on F(Q_i), such as how to expand the original query Q_i to F(Q_i), we refer the reader to [8] and [9]. P(ξ|D_t) denotes the probability that feature ξ will occur in D_t. More details on P(ξ|D_t) will be provided later in this section.
The choice of λ_ξ is somewhat different from that used in [8] since λ_ξ plays a dual role in our model. The first role, which is the same as in [8], is to weight between single term and proximity features. The other role, which is specific to our prediction task, is to normalize the size of F(Q_i). We found that the following weighting strategy for λ_ξ satisfies the above two roles and generalizes well on a variety of collections and query types:

\lambda_\xi = \begin{cases} \dfrac{\lambda_T}{\sqrt{|T(Q_i)|}}, & \xi \in T(Q_i) \\[4pt] \dfrac{1 - \lambda_T}{\sqrt{|P(Q_i)|}}, & \xi \in P(Q_i) \end{cases}    (6)

where |T(Q_i)| and |P(Q_i)| denote the number of single term and proximity features in F(Q_i) respectively. The reason for choosing the square root function in the denominator of λ_ξ is to penalize a feature set of large size appropriately, making WIG more comparable across queries of various lengths. λ_T is a fixed parameter and is set to 0.8 according to [8] throughout this paper. Similarly, log P(Q_i, C) can be written as:

\log P(Q_i, C) = -\log Z_2 + \sum_{\xi \in F(Q_i)} \lambda_\xi \log P(\xi \mid C)    (7)

When constants Z_1 and Z_2 are dropped, WIG computed in Eq. 4 can be rewritten as follows by plugging in Eq. 5 and Eq. 7:

WIG(Q_i, C, L) = \frac{1}{K} \sum_{D_t \in T_K(L)} \sum_{\xi \in F(Q_i)} \lambda_\xi \log \frac{P(\xi \mid D_t)}{P(\xi \mid C)}    (8)

One of the advantages of WIG over other techniques is that it can handle both content-based and NP queries well. Based on the type (or the predicted type) of Q_i, the calculation of WIG in Eq. 8 differs in two aspects: (1) how to estimate P(ξ|D_t) and P(ξ|C), and (2) how to choose K.

For content-based queries, P(ξ|C) is estimated by the relative frequency of feature ξ in collection C as a whole. The estimation of P(ξ|D_t) is the same as in [8]. Namely, we estimate P(ξ|D_t) by the relative frequency of feature ξ in D_t linearly smoothed with the collection frequency P(ξ|C). K in Eq. 8 is treated as a free parameter. Note that K is the only free parameter in the computation of WIG for content-based queries because all parameters involved in P(ξ|D_t) are assumed to be fixed by taking the suggested values in [8].

Regarding NP queries, we make use of document structure to estimate P(ξ|D_t) and P(ξ|C) by the so-called mixture of language models proposed in [10] and incorporated into the MRF model for Named-Page finding retrieval in [9]. The basic idea is that a document (collection) is divided into several fields such as the title field, the main-body field and the heading field. P(ξ|D_t) and P(ξ|C) are estimated by a linear combination of the language models from each field. Due to space constraints, we refer the reader to [9] for details. We adopt the exact same set of parameters as used in [9] for estimation. With regard to K in Eq. 8, we set K to 1 because the Named-Page finding task heavily focuses on the first ranked document. Consequently, there are no free parameters in the computation of WIG for NP queries.
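Eq. 8 reduces WIG to per-feature log-ratios between smoothed document models and the collection model, averaged over the top K documents. Below is a minimal Python sketch of Eq. 6 and Eq. 8; the data structures (doc_lms, collection_lm) and their names are illustrative assumptions rather than the paper's actual implementation, which operates over an Indri index.

```python
import math

def wig_score(term_feats, prox_feats, ranked_docs, doc_lms, collection_lm,
              K=5, lambda_T=0.8):
    """Minimal sketch of WIG (Eq. 8) with the feature weights of Eq. 6.

    term_feats / prox_feats -- single term and proximity features of F(Q_i)
    ranked_docs             -- document ids of the ranked list L, best first
    doc_lms[d][f]           -- smoothed P(f | d); collection_lm[f] -- P(f | C)
    (All of these names and structures are illustrative assumptions.)
    """
    # Eq. 6: lambda_T is shared across single-term features and (1 - lambda_T)
    # across proximity features, each normalized by the square root of the
    # feature-set size so that WIG is comparable across query lengths.
    lam = {}
    if term_feats:
        lam.update({f: lambda_T / math.sqrt(len(term_feats)) for f in term_feats})
    if prox_feats:
        lam.update({f: (1 - lambda_T) / math.sqrt(len(prox_feats)) for f in prox_feats})

    # Eq. 8: average, over the top K documents, of the lambda-weighted
    # log-ratio between the document model and the collection model.
    total = 0.0
    for d in ranked_docs[:K]:
        for f, w in lam.items():
            total += w * math.log(doc_lms[d][f] / collection_lm[f])
    return total / K
```

For NP queries the text fixes K = 1 and replaces the document and collection models with field-based mixtures, but the outer computation is unchanged.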
3.2 Query Feedback
In this section, we introduce another technique called query feedback (QF) for prediction. Suppose that a user issues query Q to a retrieval system and a ranked list L of documents is returned. We view the retrieval system as a noisy channel. Specifically, we assume that the output of the channel is L and the input is Q. After going through the channel, Q becomes corrupted and is transformed to ranked list L. By thinking about the retrieval process this way, the problem of predicting retrieval effectiveness turns into the task of evaluating the quality of the channel. In other words, prediction becomes finding a way to measure the degree of corruption that arises when Q is transformed to L.

As directly computing the degree of the corruption is difficult, we tackle this problem by approximation. Our main idea is that we measure to what extent information on Q can be recovered from L on the assumption that only L is observed. Specifically, we design a decoder that can accurately translate L back into a new query Q', and the similarity S between the original query Q and the new query Q' is adopted as a performance predictor. This is a sketch of how the QF technique predicts query performance. Before filling in more details, we briefly discuss why this method would work.

There is a relation between the similarity S defined above and retrieval performance. On the one hand, if the retrieval has strayed from the original sense of the query Q, the new query Q' extracted from ranked list L in response to Q would be very different from the original query Q. On the other hand, a query distilled from a ranked list containing many relevant documents is likely to be similar to the original query. Further examples in support of the relation will be provided later.

Next we detail how to build the decoder and how to measure the similarity S. In essence, the goal of the decoder is to compress ranked list L into a few informative terms that should represent the content of the top ranked documents in L. Our approach to this goal is to represent ranked list L by a language model (distribution over terms). Then terms are ranked by their contribution to the language model's KL (Kullback-Leibler) divergence from the background collection model. Top ranked terms will be chosen to form the new query Q'. This approach is similar to that used in Section 4.1 of [11]. Specifically, we take three steps to compress ranked list L into query Q' without referring to the original query.

1. We adopt the ranked list language model [14] to estimate a language model based on ranked list L. The model can be written as:

P(w \mid L) = \sum_{D \in L} P(w \mid D) \, P(D \mid L)    (9)

where w is any term and D is a document. P(D|L) is estimated by a linearly decreasing function of the rank of document D.

2. Each term in P(w|L) is ranked by the following KL-divergence contribution:

P(w \mid L) \log \frac{P(w \mid L)}{P(w \mid C)}    (10)

where P(w|C) is the collection model estimated by the relative frequency of term w in collection C as a whole.

3. The top N ranked terms by Eq. 10 form a weighted query Q' = {(w_i, t_i)}, i = 1, ..., N, where w_i denotes the i-th ranked term and weight t_i is the KL-divergence contribution of w_i in Eq. 10.

Term             cruise  ship   vessel  sea    passenger
KL contribution  0.050   0.040  0.012   0.010  0.009
Table 1: top 5 terms compressed from the ranked list in response to the query "Cruise ship damage sea life"

Term             prostate  cancer  treatment  men    therapy
KL contribution  0.177     0.140   0.028      0.025  0.020
Table 2: top 5 terms compressed from the ranked list in response to the query "prostate cancer treatments"

Two representative examples, one for a poorly performing query, "Cruise ship damage sea life" (TREC topic 719; average precision: 0.08), and the other for a high performing query, "prostate cancer treatments" (TREC topic 710; average precision: 0.49), are shown in Tables 1 and 2 respectively. These examples indicate how the similarity between the original and the new query correlates with retrieval performance. The parameter N in step 3 is set to 20 empirically, and choosing a larger value of N is unnecessary since the weights after the top 20 are usually too small to make any difference.
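The three decoding steps translate directly into code. The sketch below assumes precomputed unigram models per document and for the collection (hypothetical dictionaries, with the collection model covering the full vocabulary); it builds the ranked-list model of Eq. 9 with a normalized, linearly decreasing P(D|L), and ranks terms by the KL contribution of Eq. 10.

```python
import math

def decode_query(ranked_docs, doc_lms, collection_lm, N=20):
    """Sketch of the QF decoder (steps 1-3); input structures are illustrative.

    ranked_docs    -- document ids of L in rank order
    doc_lms[d]     -- dict of P(w | d); collection_lm[w] -- P(w | C)
    Returns Q' as a list of (term, KL contribution) pairs.
    """
    n = len(ranked_docs)
    # Step 1 (Eq. 9): P(w|L) = sum_D P(w|D) P(D|L), with P(D|L) a linearly
    # decreasing function of rank, normalized here to sum to 1.
    rank_w = [n - i for i in range(n)]
    z = float(sum(rank_w))
    p_w_L = {}
    for d, rw in zip(ranked_docs, rank_w):
        for w, p in doc_lms[d].items():
            p_w_L[w] = p_w_L.get(w, 0.0) + p * (rw / z)

    # Step 2 (Eq. 10): score each term by its KL-divergence contribution
    # against the collection model.
    contrib = {w: p * math.log(p / collection_lm[w]) for w, p in p_w_L.items()}

    # Step 3: the top N terms, with their contributions as weights, form Q'.
    return sorted(contrib.items(), key=lambda kv: kv[1], reverse=True)[:N]
```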
To measure the similarity between the original query Q and the new query Q', we first use Q' to do retrieval on the same collection. A variant of the query likelihood model [15] is adopted for retrieval. Namely, documents are ranked by:

P(Q' \mid D) = \sum_{(w_i, t_i) \in Q'} P(w_i \mid D)^{t_i}    (11)

where w_i is a term in Q', t_i is the associated weight, and D is a document. Let L' denote the new ranked list returned from the above retrieval. The similarity is measured by the overlap of documents in L and L': specifically, the percentage of documents in the top K documents of L that are also present in the top K documents of L'. The cutoff K is treated as a free parameter.

We summarize here how the QF technique predicts performance given a query Q and the associated ranked list L. We first obtain a weighted query Q' compressed from L by the above three steps. Then we use Q' to perform retrieval, and the new ranked list is L'. The overlap of documents in L and L' is used for prediction.

3.3 First Rank Change (FRC)
In this section, we propose a method called the first rank change (FRC) for performance prediction for NP queries. This method is derived from the ranking robustness technique [1] that is mainly designed for content-based queries. When directly applied to NP queries, the robustness technique will be less effective because it takes the top ranked documents as a whole into account while NP queries usually have only a single relevant document. Instead, our technique focuses on the first ranked document while the main idea of the robustness method remains. The pseudocode for computing FRC is shown in Figure 1.

Input: (1) ranked list L = {D_i}, i = 1, ..., 100, where D_i denotes the i-th ranked document; (2) query Q
1  initialize: set the number of trials J = 100000 and the counter c = 0
2  for i = 1 to J
3      perturb every document in L; let the outcome be a set F = {D_i'} where D_i' denotes the perturbed version of D_i
4      do retrieval with query Q on set F
5      c = c + 1 if and only if D_1' is ranked first in step 4
6  end for
7  return the ratio c/J
Figure 1: pseudocode for computing FRC

FRC approximates the probability that the first ranked document in the original list L will remain ranked first even after the documents are perturbed. The higher the probability is, the more confidence we have in the first ranked document. On the other hand, in the extreme case of a random ranking, the probability would be as low as 0.5. We expect that FRC has a positive association with NP query performance. We adopt [1] to implement the document perturbation step (step 3 in Figure 1) using Poisson distributions. For more details, we refer the reader to [1].
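Figure 1 translates almost directly into runnable code. The sketch below assumes a bag-of-words document representation and a caller-supplied scoring function standing in for "do retrieval with query Q"; following the Poisson perturbation of [1], each term count is resampled from a Poisson distribution whose mean is the original count. A small trial count is used here for illustration, whereas the paper fixes J = 100000.

```python
import numpy as np

def frc(top_docs, score_doc, trials=1000, seed=None):
    """Sketch of First Rank Change (Figure 1); representation is assumed.

    top_docs  -- the top documents of L as {term: count} dicts, best first
    score_doc -- callable returning a retrieval score for a perturbed
                 document with respect to query Q (stands in for step 4)
    """
    rng = np.random.default_rng(seed)
    c = 0
    for _ in range(trials):
        # Step 3: perturb every document; per [1], resample each term count
        # from a Poisson whose mean is the original count.
        perturbed = [{t: rng.poisson(n) for t, n in doc.items()}
                     for doc in top_docs]
        # Steps 4-5: re-rank the perturbed set and count the trials in which
        # the originally first-ranked document keeps the top position.
        scores = [score_doc(doc) for doc in perturbed]
        if int(np.argmax(scores)) == 0:
            c += 1
    return c / trials
```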
4. EVALUATION
We now present the results of predicting query performance by our models. Three state-of-the-art techniques are adopted as our baselines. We evaluate our techniques across a variety of web retrieval settings. As mentioned before, we consider two types of queries, that is, content-based (CB) queries and Named-Page (NP) finding queries. First, suppose that the query types are known. We investigate the correlation between the predicted retrieval performance and the actual performance for both types of queries separately. Results show that our methods yield considerable improvements over the baselines.

We then consider a more challenging scenario where no prior information on query types is available. Two sub-cases are considered. In the first one, there exists only one type of query but the actual type is unknown. We assume a mixture of the two query types in the second case. We demonstrate that our models achieve good accuracy under this demanding scenario, making prediction practical in a real-world web search environment.

4.1 Experimental Setup
Our evaluation focuses on the GOV2 collection, which contains about 25 million documents crawled from web sites in the .gov domain during 2004 [3]. We create two kinds of data sets for CB queries and NP queries respectively. For the CB type, we use the ad-hoc topics of the Terabyte Tracks of 2004, 2005 and 2006 and name them TB04-adhoc, TB05-adhoc and TB06-adhoc respectively. In addition, we also use the ad-hoc topics of the 2004 Robust Track (RT04) to test the adaptability of our techniques to a non-web environment. For NP queries, we use the Named-Page finding topics of the Terabyte Tracks of 2005 and 2006 and name them TB05-NP and TB06-NP respectively. All queries used in our experiments are titles of TREC topics as we center on web retrieval. Table 3 summarizes the above data sets.

Name        Collection           Topic Number      Query Type
TB04-adhoc  GOV2                 701-750           CB
TB05-adhoc  GOV2                 751-800           CB
TB06-adhoc  GOV2                 801-850           CB
RT04        Disk 4+5 (minus CR)  301-450; 601-700  CB
TB05-NP     GOV2                 NP601-NP872       NP
TB06-NP     GOV2                 NP901-NP1081      NP
Table 3: Summary of test collections and topics

Retrieval performance of individual content-based and NP queries is measured by the average precision and the reciprocal rank of the first correct answer respectively. We make use of the Markov Random Field model for both ad-hoc and Named-Page finding retrieval. We adopt the same setting of retrieval parameters used in [8, 9]. The Indri search engine [12] is used for all of our experiments. Though not reported here, we also tried the query likelihood model for ad-hoc retrieval and found that the results change little because of the very high correlation between the query performances obtained by the two retrieval models (0.96 measured by Pearson's coefficient).

4.2 Known Query Types
Suppose that query types are known. We treat each type of query separately and measure the correlation with average precision (or the reciprocal rank in the case of NP queries). We adopt the Pearson's correlation test, which reflects the degree of linear relationship between the predicted and the actual retrieval performance.

4.2.1 Content-based Queries

Methods        Clarity  Robust  JSD    WIG    QF     WIG+QF
TB04+05 adhoc  0.333    0.317   0.362  0.574  0.480  0.637
TB06 adhoc     0.076    0.294   N/A    0.464  0.422  0.511
Table 4: Pearson's correlation coefficients for correlation with average precision on the Terabyte Tracks (ad-hoc) for the clarity score, the robustness score, the JSD-based method (we directly cite the score reported in [2]), WIG, query feedback (QF) and a linear combination of WIG and QF. Bold cases mean the results are statistically significant at the 0.01 level.

Table 4 shows the correlation with average precision on two data sets: one is a combination of TB04-adhoc and TB05-adhoc (100 topics in total) and the other is TB06-adhoc (50 topics). The reason that we put TB04-adhoc and TB05-adhoc together is to make our results comparable to [2]. Our baselines are the clarity score (Clarity) [6], the robustness score (Robust) [1] and the JSD-based method (JSD) [2].
For the clarity and robustness scores, we have tried different parameter settings and report the highest correlation coefficients we have found. We directly cite the result of the JSD-based method reported in [2]. The table also shows the results for the weighted information gain (WIG) method and the query feedback (QF) method for predicting content-based queries. As we described in the previous section, both WIG and QF have one free parameter to set, that is, the cutoff rank K. We train the parameter on one dataset and test on the other. When combining WIG and QF, a simple linear combination is used and the combination weight is learned from the training data set.

From these results, we can see that our methods are considerably more accurate compared to the baselines. We also observe that further improvements are obtained from the combination of WIG and QF, suggesting that they measure different properties of the retrieval process that relate to performance. We discover that our methods generalize well on TB06-adhoc while the correlation for the clarity score with retrieval performance on this data set is considerably worse. Further investigation shows that the mean average precision of TB06-adhoc is 0.342, about 10% better than that of the first data set. While the other three methods typically consider the top 100 or fewer documents given a ranked list, the clarity method usually needs the top 500 or more documents to adequately measure the coherence of a ranked list. Higher mean average precision makes ranked lists retrieved by different queries more similar in terms of coherence at the level of the top 500 documents. We believe that this is the main reason for the low accuracy of the clarity score on the second data set.

Though this paper focuses on a web search environment, it is desirable that our techniques work consistently well in other situations. To this end, we examine the effectiveness of our techniques on the 2004 Robust Track. For our methods, we evenly divide all of the test queries into five groups and perform five-fold cross validation. Each time we use one group for training and the remaining four groups for testing. We make use of all of the queries for our two baselines, that is, the clarity score and the robustness score. The parameters for our baselines are the same as those used in [1]. The results shown in Table 5 demonstrate that the prediction accuracy of our methods is on a par with that of the two strong baselines.

Clarity  Robust  WIG    QF
0.464    0.539   0.468  0.464
Table 5: Comparison of Pearson's correlation coefficients on the 2004 Robust Track for the clarity score, the robustness score, WIG and query feedback (QF). Bold cases mean the results are statistically significant at the 0.01 level.

Furthermore, we examine the prediction sensitivity of our methods to the cutoff rank K. With respect to WIG, it is quite robust to K on the Terabyte Tracks (2004-2006) while it prefers a small value of K like 5 on the 2004 Robust Track. In other words, a small value of K is a nearly-optimal choice for both kinds of tracks. Considering the fact that all other parameters involved in WIG are fixed and consequently the same for the two cases, this means WIG can achieve nearly-optimal prediction accuracy in two considerably different situations with exactly the same parameter setting. Regarding QF, it prefers a larger value of K such as 100 on the Terabyte Tracks and a smaller value of K such as 25 on the 2004 Robust Track.
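The correlation numbers reported above are plain Pearson coefficients between per-query predictor outputs and per-query average precision (or reciprocal rank), so the measurement itself is a single call. A minimal sketch using SciPy, with made-up values standing in for real per-query scores:

```python
from scipy.stats import pearsonr

# Hypothetical per-query predictor outputs and actual average precision values.
predicted = [0.42, 0.17, 0.66, 0.31, 0.55, 0.08, 0.73]
actual_ap = [0.35, 0.10, 0.58, 0.22, 0.49, 0.05, 0.61]

r, p = pearsonr(predicted, actual_ap)
print(f"Pearson r = {r:.3f}, two-sided p = {p:.4f}")
```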
4.2.2 NP Queries
We adopt WIG and first rank change (FRC) for predicting NP-query performance. We also try a linear combination of the two as in the previous section. The combination weight is obtained from the other data set. We use the correlation with the reciprocal ranks measured by the Pearson's correlation test to evaluate prediction quality. The results are presented in Table 6. Again, our baselines are the clarity score and the robustness score. To make a fair comparison, we tune the clarity score in different ways. We found that using the first ranked document to build the query model yields the best prediction accuracy. We also attempted to utilize document structure by using the mixture of language models mentioned in Section 3.1. Little improvement was obtained. The correlation coefficients for the clarity score reported in Table 6 are the best we have found. As we can see, our methods considerably outperform the clarity score technique on both of the runs. This confirms our intuition that the use of a coherence-based measure like the clarity score is inappropriate for NP queries.

Methods  Clarity  Robust  WIG    FRC    WIG+FRC
TB05-NP  0.150    -0.370  0.458  0.440  0.525
TB06-NP  0.112    -0.160  0.478  0.386  0.515
Table 6: Pearson's correlation coefficients for correlation with reciprocal ranks on the Terabyte Tracks (NP) for the clarity score, the robustness score, WIG, the first rank change (FRC) and a linear combination of WIG and FRC. Bold cases mean the results are statistically significant at the 0.01 level.

Regarding the robustness score, we also tune the parameters and report the best we have found. We observe an interesting and surprising negative correlation with reciprocal ranks. We explain this finding briefly. A high robustness score means that a number of top ranked documents in the original ranked list are still highly ranked after perturbing the documents. The existence of such documents is a good sign of high performance for content-based queries, as these queries usually contain a number of relevant documents [1]. However, with regard to NP queries, one fundamental difference is that there is only one relevant document for each query. The existence of such documents can confuse the ranking function and lead to low retrieval performance. Although the negative correlation with retrieval performance exists, the strength of the correlation is weaker and less consistent compared to our methods, as shown in Table 6.

Based on the above analysis, we can see that current prediction techniques like the clarity score and the robustness score that are mainly designed for content-based queries face significant challenges and are inadequate to deal with NP queries. Our two techniques proposed for NP queries consistently demonstrate good prediction accuracy, displaying initial success in solving the problem of predicting performance for NP queries. Another point we want to stress is that the WIG method works well for both types of queries, a desirable property that most prediction techniques lack.

4.3 Unknown Query Types
In this section, we run two kinds of experiments without access to query type labels. First, we assume that only one type of query exists but the type is unknown. Second, we experiment on a mixture of content-based and NP queries. The following two subsections report results for the two conditions respectively.

4.3.1 Only One Type Exists
We assume that all queries are of the same type, that is, they are either NP queries or content-based queries.
We choose WIG to deal with this case because it shows good prediction accuracy for both types of queries in the previous section. We consider two cases: (1) CB: all 150 title queries from the ad-hoc task of the Terabyte Tracks 2004-2006; (2) NP: all 433 NP queries from the named-page finding task of the Terabyte Tracks 2005 and 2006. We take a simple strategy by labeling all of the queries in each case as the same type (either NP or CB) regardless of their actual type. The computation of WIG is based on the labeled query type instead of the actual type. There are four possibilities with respect to the relation between the actual type and the labeled type. The correlation with retrieval performance under the four possibilities is presented in Table 7. For example, the value 0.445 at the intersection between the second row and the third column shows the Pearson's correlation coefficient for correlation with average precision when the content-based queries are incorrectly labeled as the NP type.

Based on these results, we recommend treating all queries as the NP type when only one query type exists and accurate query classification is not feasible, considering the risk that a large loss of accuracy will occur if NP queries are incorrectly labeled as content-based queries. These results also demonstrate the strong adaptability of WIG to different query types.

             CB (labeled)  NP (labeled)
CB (actual)  0.536         0.445
NP (actual)  0.174         0.467
Table 7: Comparison of Pearson's correlation coefficients for correlation with retrieval performance under four possibilities on the Terabyte Tracks. Bold cases mean the results are statistically significant at the 0.01 level.

4.3.2 A Mixture of Content-based and NP Queries
A mixture of the two types of queries is a more realistic situation that a web search engine will meet. We evaluate prediction accuracy by how accurately poorly-performing queries can be identified by the prediction method, assuming that actual query types are unknown (but we can predict query types). This is a challenging task because both the predicted and actual performance for one type of query can be incomparable to that for the other type. Next we discuss how to implement our evaluation.

We create a query pool which consists of all of the 150 ad-hoc title queries from the Terabyte Tracks 2004-2006 and all of the 433 NP queries from the Terabyte Tracks 2005 and 2006. We divide the queries in the pool into two classes: good (better than 50% of the queries of the same type in terms of retrieval performance) and bad (otherwise). According to these standards, an NP query with a reciprocal rank above 0.2 or a content-based query with an average precision above 0.315 will be considered good. Then, each time we randomly select one query Q from the pool with probability p that Q is content-based. The remaining queries are used as training data. We first decide the type of query Q according to a query classifier. Namely, the query classifier tells us whether query Q is NP or content-based. Based on the predicted query type and the score computed for query Q by a prediction technique, a binary decision is made about whether query Q is good or bad by comparing to the score threshold of the predicted query type obtained from the training data. Prediction accuracy is measured by the accuracy of the binary decision.
In our implementation, we repeatedly take a test query from the query pool, and prediction accuracy is computed as the percentage of correct decisions, that is, the percentage of good (bad) queries that are predicted to be good (bad); random guessing therefore yields 50% accuracy. Let us take the WIG method as an example to illustrate the process. Two WIG thresholds (one for NP queries and the other for content-based queries) are trained by maximizing prediction accuracy on the training data. When a test query is labeled as the NP (CB) type by the query type classifier, it is predicted to be good if and only if its WIG score is above the NP (CB) threshold. Similar procedures are used for the other prediction techniques.
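Both the per-type score thresholds and the cut used by the robustness-based type classifier introduced next can be trained with a simple grid search that maximizes accuracy on the training data. A minimal sketch under that assumption, with hypothetical variable names:

```python
# Grid-search a score cut that maximizes binary accuracy on training data.
# flip=True predicts the positive class when the score falls BELOW the cut,
# as needed for the robustness-based NP/CB classifier. Inputs are assumed
# to be precomputed per-query scores and boolean labels.
def train_threshold(scores, labels, flip=False):
    best_t, best_acc = None, -1.0
    for t in sorted(set(scores)):
        preds = [(s < t) if flip else (s > t) for s in scores]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# e.g., per-type WIG cut:            wig_cut_np = train_threshold(wig_np, good_np)
# robustness cut for NP detection:   rob_cut = train_threshold(rob, is_np, flip=True)
```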
Now we briefly introduce the automatic query type classifier used in this paper. We find that the robustness score, though originally proposed for performance prediction, is a good indicator of query type: on average, content-based queries have a much higher robustness score than NP queries. Figure 2 shows the distributions of robustness scores for NP and content-based queries. According to this finding, the robustness score classifier attaches an NP (CB) label to a query if its robustness score is below (above) a threshold trained from the training data.

Figure 2: Distribution of robustness scores (x-axis: robustness score; y-axis: probability density) for NP and CB queries. The NP queries are the 252 NP topics from the 2005 Terabyte Track. The content-based queries are the 150 ad-hoc titles from the Terabyte Tracks 2004-2006. The probability distributions are estimated by the kernel density estimation method.

Strategies   Robust   WIG-1   WIG-2   WIG-3   Optimal
p=0.6        0.565    0.624   0.665   0.684   0.701
p=0.4        0.567    0.633   0.654   0.673   0.696

Table 8: Comparison of prediction accuracy for the five strategies in the mixed-query situation. Queries are sampled from the pool in two ways: (1) the sampled query is content-based with probability p=0.6 (that is, NP with probability 0.4); (2) the probability is set to p=0.4.

We consider five strategies in our experiments. In the first strategy (denoted by "Robust"), we use the robustness score for query performance prediction with the help of a perfect query classifier that always correctly maps a query into one of the two categories (that is, NP or CB). This strategy represents the level of prediction accuracy that current prediction techniques can achieve under the ideal condition that query types are known. In the following three strategies, the WIG method is adopted for performance prediction; they differ only in the query classifier used: (1) a classifier that always assigns the NP type; (2) the robustness score classifier described above; (3) a perfect classifier. These three strategies are denoted by WIG-1, WIG-2 and WIG-3 respectively; our interest in WIG-1 follows from the results of Section 4.3.1. In the last strategy (denoted by "Optimal"), which serves as an upper bound on how well we can do so far, we fully exploit our prediction techniques for each query type, assuming a perfect query classifier is available: specifically, we linearly combine WIG and QF for content-based queries and WIG and FRC for NP queries. The results for the five strategies under the two sampling settings (p=0.6 and p=0.4) are shown in Table 8. We can see that, in terms of prediction accuracy, WIG-2 (the WIG method with the automatic query classifier) is not only better than the first two strategies but also close to WIG-3, where a perfect classifier is assumed. Some further improvements over WIG-3 are observed when WIG is combined with the other prediction techniques (the Optimal strategy). The merit of WIG-2 is that it provides a practical solution to automatically identifying poorly performing queries in a Web search environment with mixed query types, a setting that poses considerable obstacles to traditional prediction techniques.

5. CONCLUSIONS AND FUTURE WORK

To our knowledge, our paper is the first to thoroughly explore prediction of query performance in web search environments. We demonstrated that our models result in higher prediction accuracy than previously published techniques, which were not specially devised for web search scenarios. In this paper, we focus on two types of queries in web search: content-based and Named-Page (NP) finding queries, corresponding to the ad-hoc retrieval task and the Named-Page finding task respectively. For both types of web queries, our prediction models were shown to be substantially more accurate than the current state-of-the-art techniques. Furthermore, we considered the more realistic case in which no prior information on query types is available, and demonstrated that the WIG method is particularly suitable for this situation. Considering the adaptability of WIG to a range of collections and query types, one of our future plans is to apply this method to predict user preference of search results on realistic data collected from a commercial search engine. Other than accuracy, another major issue that prediction techniques have to deal with in a Web environment is efficiency. Fortunately, since the WIG score is computed just over the terms and phrases that appear in the query, this calculation can be made very efficient with the support of an index. On the other hand, the computation of QF and FRC is less efficient, since QF needs to run retrieval on the collection twice and FRC needs to repeatedly rank the perturbed documents. Improving the efficiency of QF and FRC is future work. In addition, the prediction techniques proposed in this paper have the potential to improve retrieval performance when combined with other IR techniques. For example, our techniques can be incorporated into popular query modification techniques such as query expansion and query relaxation: guided by performance prediction, we can make a better decision on when or how to modify queries to enhance retrieval effectiveness. We would like to carry out research in this direction in the future.

6. ACKNOWLEDGMENTS

This work was supported in part by the Center for Intelligent Information Retrieval, in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C0023, and in part by an award from Google. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect those of the sponsor. In addition, we thank Donald Metzler for his valuable comments on this work.

7. REFERENCES

[1] Y. Zhou and W. B. Croft. Ranking Robustness: A Novel Framework to Predict Query Performance. In Proceedings of CIKM 2006.
[2] D. Carmel, E. Yom-Tov, A. Darlow and D. Pelleg. What Makes a Query Difficult? In Proceedings of SIGIR 2006.
[3] C. L. A. Clarke, F. Scholer and I. Soboroff. The TREC 2005 Terabyte Track. In the Online Proceedings of TREC 2005.
[4] B. He and I. Ounis. Inferring Query Performance Using Pre-retrieval Predictors. In Proceedings of SPIRE 2004.
[5] S. Tomlinson. Robust, Web and Terabyte Retrieval with Hummingbird SearchServer at TREC 2004. In the Online Proceedings of TREC 2004.
[6] S. Cronen-Townsend, Y. Zhou and W. B. Croft. Predicting Query Performance. In Proceedings of SIGIR 2002.
[7] V. Vinay, I. J. Cox, N. Milic-Frayling and K. Wood. On Ranking the Effectiveness of Searches. In Proceedings of SIGIR 2006.
[8] D. Metzler and W. B. Croft. A Markov Random Field Model for Term Dependencies. In Proceedings of SIGIR 2005.
[9] D. Metzler, T. Strohman, Y. Zhou and W. B. Croft. Indri at TREC 2005: Terabyte Track. In the Online Proceedings of TREC 2005.
[10] P. Ogilvie and J. Callan. Combining Document Representations for Known-item Search. In Proceedings of SIGIR 2003.
[11] A. Berger and J. Lafferty. Information Retrieval as Statistical Translation. In Proceedings of SIGIR 1999.
[12] Indri search engine: http://www.lemurproject.org/indri/
[13] I. J. Taneja. On Generalized Information Measures and Their Applications. Advances in Electronics and Electron Physics, Academic Press (USA), 76, 1989, 327-413.
[14] S. Cronen-Townsend, Y. Zhou and W. B. Croft. A Framework for Selective Query Expansion. In Proceedings of CIKM 2004.
[15] F. Song and W. B. Croft. A General Language Model for Information Retrieval. In Proceedings of SIGIR 1999.
[16] Personal email contact with Vishwa Vinay and our own experiments.
[17] E. Yom-Tov, S. Fine, D. Carmel and A. Darlow. Learning to Estimate Query Difficulty: Including Applications to Missing Content Detection and Distributed Information Retrieval. In Proceedings of SIGIR 2005.
Table 5: Comparison of Pearson's correlation coefficients on the 2004 Robust Track for clarity score, robustness score, WIG and query feedback (QF). Bold cases mean the results are statistically significant at the 0.01 level. Furthermore, we examine the prediction sensitivity of our methods to the cutoff rank K. With respect to WIG, it is quite robust to K on the Terabyte Tracks (2004-2006) while it prefers a small value of K like 5 on the 2004 Robust Track. In other words, a small value of K is a nearly-optimal choice for both kinds of tracks. Considering the fact that all other parameters involved in WIG are fixed and consequently the same for the two cases, this means WIG can achieve nearly-optimal prediction accuracy in two considerably different situations with exactly the same parameter setting. Regarding QF, it prefers a larger value of K such as 100 on the Terabyte Tracks and a smaller value of K such as 25 on the 2004 Robust Track. 4.2.2 NP Queries We adopt WIG and first rank change (FRC) for predicting NPquery performance. We also try a linear combination of the two as in the previous section. The combination weight is obtained from the other data set. We use the correlation with the reciprocal ranks measured by the Pearson's correlation test to evaluate prediction quality. The results are presented in Table 6. Again, our baselines are the clarity score and the robustness score. To make a fair comparison, we tune the clarity score in different ways. We found that using the first ranked document to build the query model yields the best prediction accuracy. We also attempted to utilize document structure by using the mixture of language models mentioned in section 3.1. Little improvement was obtained. The correlation coefficients for the clarity score reported in Table 6 are the best we have found. As we can see, our methods considerably outperform the clarity score technique on both of the runs. This confirms our intuition that the use of a coherence-based measure like the clarity score is inappropriate for NP queries. Table 6: Pearson's correlation coefficients for correlation with reciprocal ranks on the Terabyte Tracks (NP) for clarity score, robustness score, WIG, the first rank change (FRC) and a linear combination of WIG and FRC. Bold cases mean the results are statistically significant at the 0.01 level. Regarding the robustness score, we also tune the parameters and report the best we have found. We observe an interesting and surprising negative correlation with reciprocal ranks. We explain this finding briefly. A high robustness score means that a number of top ranked documents in the original ranked list are still highly ranked after perturbing the documents. The existence of such documents is a good sign of high performance for content-based queries as these queries usually contain a number of relevant documents [1]. However, with regard to NP queries, one fundamental difference is that there is only one relevant document for each query. The existence of such documents can confuse the ranking function and lead to low retrieval performance. Although the negative correlation with retrieval performance exists, the strength of the correlation is weaker and less consistent compared to our methods as shown in Table 6. Based on the above analysis, we can see that current prediction techniques like clarity score and robustness score that are mainly designed for content-based queries face significant challenges and are inadequate to deal with NP queries. 
Our two techniques proposed for NP queries consistently demonstrate good prediction accuracy, displaying initial success in solving the problem of predicting performance for NP queries. Another point we want to stress is that the WIG method works well for both types of queries, a desirable property that most prediction techniques lack. 4.3 Unknown Query Types In this section, we run two kinds of experiments without access to query type labels. First, we assume that only one type of query exists but the type is unknown. Second, we experiment on a mixture of content-based and NP queries. The following two subsections will report results for the two conditions respectively. 4.3.1 Only One Type exists We assume that all queries are of the same type, that is, they are either NP queries or content-based queries. We choose WIG to deal with this case because it shows good prediction accuracy for both types of queries in the previous section. We consider two cases: (1) CB: all 150 title queries from the ad-hoc task of the Terabyte Tracks 2004-2006 (2) NP: all 433 NP queries from the named page finding task of the Terabyte Tracks 2005 and 2006. We take a simple strategy by labeling all of the queries in each case as the same type (either NP or CB) regardless of their actual type. The computation of WIG will be based on the labeled query type instead of the actual type. There are four possibilities with respect to the relation between the actual type and the labeled type. The correlation with retrieval performance under the four possibilities is presented in Table 7. For example, the value 0.445 at the intersection between the second row and the third column shows the Pearson's correlation coefficient for correlation with average precision when the content-based queries are incorrectly labeled as the NP type. Based on these results, we recommend treating all queries as the NP type when only one query type exists and accurate query classification is not feasible, considering the risk that a large loss of accuracy will occur if NP queries are incorrectly labeled as content-based queries. These results also demonstrate the strong adaptability of WIG to different query types. Table 7: Comparison of Pearson's correlation coefficients for correlation with retrieval performance under four possibilities on the Terabyte Tracks (NP). Bold cases mean the results are statistically significant at the 0.01 level. 4.3.2 A mixture of contented-based and NP queries A mixture of the two types of queries is a more realistic situation that a Web search engine will meet. We evaluate prediction accuracy by how accurately poorly-performing queries can be identified by the prediction method assuming that actual query types are unknown (but we can predict query types). This is a challenging task because both the predicted and actual performance for one type of query can be incomparable to that for the other type. Next we discuss how to implement our evaluation. We create a query pool which consists of all of the 150 ad-hoc title queries from Terabyte Track 2004-2006 and all of the 433 NP queries from Terabyte Track 2005 & 2006. We divide the queries in the pool into classes: "good" (better than 50% of the queries of the same type in terms of retrieval performance) and "bad" (otherwise). According to these standards, a NP query with the reciprocal rank above 0.2 or a content-based query with the average precision above 0.315 will be considered as good. 
Then, each time we randomly select one query Q from the pool with probability p that Q is contented-based. The remaining queries are used as training data. We first decide the type of query Q according to a query classifier. Namely, the query classifier tells us whether query Q is NP or content-based. Based on the predicted query type and the score computed for query Q by a prediction technique, a binary decision is made about whether query Q is good or bad by comparing to the score threshold of the predicted query type obtained from the training data. Prediction accuracy is measured by the accuracy of the binary decision. In our implementation, we repeatedly take a test query from the query pool and prediction accuracy is computed as the percentage of correct decisions, that is, a good (bad) query is predicted to be good (bad). It is obvious that random guessing will lead to 50% accuracy. Let us take the WIG method for example to illustrate the process. Two WIG thresholds (one for NP queries and the other for content-based queries) are trained by maximizing the prediction accuracy on the training data. When a test query is labeled as the NP (CB) type by the query type classifier, it will be predicted to be good if and only if the WIG score for this query is above the NP (CB) threshold. Similar procedures will be taken for other prediction techniques. Now we briefly introduce the automatic query type classifier used in this paper. We find that the robustness score, though originally proposed for performance prediction, is a good indicator of query types. We find that on average content-based queries have a much higher robustness score than NP queries. For example, Figure 2 shows the distributions of robustness scores for NP and content-based queries. According to this finding, the robustness score classifier will attach a NP (CB) label to the query if the robustness score for the query is below (above) a threshold trained from the training data. Figure 2: Distribution of robustness scores for NP and CB queries. The NP queries are the 252 NP topics from the 2005 Terabyte Track. The content-based queries are the 150 ad-hoc title from the Terabyte Tracks 2004-2006. The probability distributions are estimated by the Kernel density estimation method. Table 8: Comparison of prediction accuracy for five strategies in the mixed-query situation. Two ways to sample a query from the pool: (1) the sampled query is content-based with the probability p = 0.6. (that is, the query is NP with probability 0.4) (2) set the probability p = 0.4. We consider five strategies in our experiments. In the first strategy (denoted by "robust"), we use the robustness score for query performance prediction with the help of a perfect query classifier that always correctly map a query into one of the two categories (that is, NP or CB). This strategy represents the level of prediction accuracy that current prediction techniques can achieve in an ideal condition that query types are known. In the next following three strategies, the WIG method is adopted for performance prediction. The difference among the three is that three different query classifiers are used for each strategy: (1) the classifier always classifies a query into the NP type. (2) the classifier is the robust score classifier mentioned above. (3) the classifier is a perfect one. These three strategies are denoted by WIG-1, WIG-2 and WIG-3 respectively. The reason we are interested in WIG-1 is based on the results from section 4.3.1. 
In the last strategy (denoted by "Optimal") which serves as an upper bound on how well we can do so far, we fully make use of our prediction techniques for each query type assuming a perfect query classifier is available. Specifically, we linearly combine WIG and QF for content-based queries and WIG and FRC for NP queries. The results for the five strategies are shown in Table 8. For each strategy, we try two ways to sample a query from the pool: (1) the sampled query is CB with probability p = 0.6. (the query is NP with probability 0.4) (2) set the probability p = 0.4. From Table 8 We can see that in terms of prediction accuracy WIG-2 (the WIG method with the automatic query classifier) is not only better than the first two cases, but also is close to WIG-3 where a perfect classifier is assumed. Some further improvements over WIG-3 are observed when combined with other prediction techniques. The merit of WIG-2 is that it provides a practical solution to automatically identifying poorly performing queries in a Web search environment with mixed query types, which poses considerable obstacles to traditional prediction techniques. 5. CONCLUSIONS AND FUTURE WORK To our knowledge, our paper is the first to thoroughly explore prediction of query performance in web search environments. We demonstrated that our models resulted in higher prediction accuracy than previously published techniques not specially devised for web search scenarios. In this paper, we focus on two types of queries in web search: content-based and Named-Page (NP) finding queries, corresponding to the ad-hoc retrieval task and the Named-Page finding task respectively. For both types of web queries, our prediction models were shown to be substantially more accurate than the current state-of-the-art techniques. Furthermore, we considered a more realistic case that no prior information on query types is available. We demonstrated that the WIG method is particularly suitable for this situation. Considering the adaptability of WIG to a range of collections and query types, one of our future plans is to apply this method to predict user preference of search results on realistic data collected from a commercial search engine. Other than accuracy, another major issue that prediction techniques have to deal with in a Web environment is efficiency. Fortunately, since the WIG score is computed just over the terms and the phrases that appear in the query, this calculation can be made very efficient with the support of index. On the other hand, the computation of QF and FRC is relatively less efficient since QF needs to retrieve the whole collection twice and FRC needs to repeatedly rank the perturbed documents. How to improve the efficiency of QF and FRC is our future work. In addition, the prediction techniques proposed in this paper have the potential of improving retrieval performance by combining with other IR techniques. For example, our techniques can be incorporated to popular query modification techniques such as query expansion and query relaxation. Guided by performance prediction, we can make a better decision on when to or how to modify queries to enhance retrieval effectiveness. We would like to carry out research in this direction in the future.
Query Performance Prediction in Web Search Environments ABSTRACT Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogenous test collections of small sizes, face serious challenges in web search environments where collections are significantly more heterogeneous and different types of retrieval tasks exist. In this paper, we present three techniques to address these challenges. We focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding. Our evaluation is mainly performed on the GOV2 collection. In addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation that the two types of queries are mixed together without prior information on query types. To assist prediction under the mixed-query situation, a novel query classifier is adopted. Results show that our prediction of web query performance is substantially more accurate than the current stateof-the-art prediction techniques. Consequently, our paper provides a practical approach to performance prediction in realworld web settings. 1. INTRODUCTION Query performance prediction has many applications in a variety of information retrieval (IR) areas such as improving retrieval consistency, query refinement, and distributed IR. The importance of this problem has been recognized by IR researchers and a number of new methods have been proposed for prediction recently [1, 2, 17]. Most work on prediction has focused on the traditional "ad-hoc" retrieval task where query performance is measured according to topical relevance. These prediction models are evaluated on TREC document collections which typically consist of no more than one million relatively homogenous newswire articles. With the popularity and influence of the Web, prediction techniques that will work well for web-style queries are highly preferable. However, web search environments pose significant challenges to current prediction models that are mainly designed for traditional TREC settings. Here we outline some of these challenges. First, web collections, which are much larger than conventional TREC collections, include a variety of documents that are different in many aspects such as quality and style. Current prediction techniques can be vulnerable to these characteristics of web collections. For example, the reported prediction accuracy of the ranking robustness technique and the clarity technique on the GOV2 collection (a large web collection) is significantly worse compared to the other TREC collections [1]. Similar prediction accuracy on the GOV2 collection using another technique is reported in [2], confirming the difficult of predicting query performance on a large web collection. Furthermore, web search goes beyond the scope of the ad-hoc retrieval task based on topical relevance. For example, the Named-Page (NP) finding task, which is a navigational task, is also popular in web retrieval. Query performance prediction for the NP task is still necessary since NP retrieval performance is far from perfect. In fact, according to the report on the NP task of the 2005 Terabyte Track [3], about 40% of the test queries perform poorly (no correct answer in the first 10 search results) even in the best run from the top group. To our knowledge, little research has explicitly addressed the problem of NP-query performance prediction. 
Current prediction models devised for content-based queries will be less effective for NP queries considering the fundamental differences between the two. Third, in real-world web search environments, user queries are usually a mixture of different types and prior knowledge about the type of each query is generally unavailable. The mixed-query situation raises new problems for query performance prediction. For instance, we may need to incorporate a query classifier into prediction models. Despite these problems, the ability to handle this situation is a crucial step towards turning query performance prediction from an interesting research topic into a practical tool for web retrieval. In this paper, we present three techniques to address the above challenges that current prediction models face in Web search environments. Our work focuses on query performance prediction for the content-based (ad-hoc) retrieval task and the name-page finding task in the context of web retrieval. Our first technique, called weighted information gain (WIG), makes use of both single term and term proximity features to estimate the quality of top retrieved documents for prediction. We find that WIG offers consistent prediction accuracy across various test collections and query types. Moreover, we demonstrate that good prediction accuracy can be achieved for the mixed-query situation by using WIG with the help of a query type classifier. Query feedback and first rank change, which are our second and third prediction techniques, perform well for content-based queries and NP queries respectively. Our main contributions include: (1) considerably improved prediction accuracy for web content-based queries over several state-of-the-art techniques. (2) new techniques for successfully predicting NP-query performance. (3) a practical and fully automatic solution to predicting mixed-query performance. In addition, one minor contribution is that we find that the robustness score [1], which was originally proposed for performance prediction, is helpful for query classification. 2. RELATED WORK As we mentioned in the introduction, a number of prediction techniques have been proposed recently that focus on contentbased queries in the topical relevance (ad-hoc) task. We know of no published work that addresses other types of queries such as NP queries, let alone a mixture of query types. Next we review some representative models. The major difficulty of performance prediction comes from the fact that many factors have an impact on retrieval performance. Each factor affects performance to a different degree and the overall effect is hard to predict accurately. Therefore, it is not surprising to notice that simple features, such as the frequency of query terms in the collection [4] and the average IDF of query terms [5], do not predict well. In fact, most of the successful techniques are based on measuring some characteristics of the retrieved document set to estimate topic difficulty. For example, the clarity score [6] measures the coherence of a list of documents by the KL-divergence between the query model and the collection model. The robustness score [1] quantifies another property of a ranked list: the robustness of the ranking in the presence of uncertainty. Carmel et al. [2] found that the distance measured by the Jensen-Shannon divergence between the retrieved document set and the collection is significantly correlated to average precision. Vinay et al. 
[7] proposed four measures to capture the geometry of the top retrieved documents for prediction. The most effective measure is the sensitivity to document perturbation, an idea somewhat similar to the robustness score. Unfortunately, their way of measuring the sensitivity does not perform equally well for short queries, and prediction accuracy drops considerably when a state-of-the-art retrieval technique (like Okapi or a language modeling approach) is adopted for retrieval instead of the tf-idf weighting used in their paper [16]. The difficulties of applying these models in web search environments have already been mentioned. In this paper, we mainly adopt the clarity score and the robustness score as our baselines. We experimentally show that the baselines, even after being carefully tuned, are inadequate for the web environment. One of our prediction models, WIG, is related to the Markov random field (MRF) model for information retrieval [8]. The MRF model directly models term dependence and is found to be highly effective across a variety of test collections (particularly web collections) and retrieval tasks. This model is used to estimate the joint probability distribution over documents and queries, an important part of WIG. The superiority of WIG over other prediction techniques based on unigram features, which will be demonstrated later in our paper, coincides with that of MRF for retrieval. In other words, it is interesting to note that term dependence, when modeled appropriately, can be helpful for both improving and predicting retrieval performance.
3. PREDICTION MODELS
3.1 Weighted Information Gain (WIG)
3.2 Query Feedback
3.3 First Rank Change (FRC)
4. EVALUATION
4.1 Experimental Setup
4.2 Known Query Types
4.2.1 Content-based Queries
4.2.2 NP Queries
4.3 Unknown Query Types
4.3.1 Only One Type Exists
4.3.2 A Mixture of Content-based and NP Queries
5. CONCLUSIONS AND FUTURE WORK
To our knowledge, our paper is the first to thoroughly explore prediction of query performance in web search environments. We demonstrated that our models resulted in higher prediction accuracy than previously published techniques not specially devised for web search scenarios. In this paper, we focus on two types of queries in web search: content-based and Named-Page (NP) finding queries, corresponding to the ad-hoc retrieval task and the Named-Page finding task respectively. For both types of web queries, our prediction models were shown to be substantially more accurate than the current state-of-the-art techniques. Furthermore, we considered a more realistic case in which no prior information on query types is available. We demonstrated that the WIG method is particularly suitable for this situation. Considering the adaptability of WIG to a range of collections and query types, one of our future plans is to apply this method to predict user preference of search results on realistic data collected from a commercial search engine. Other than accuracy, another major issue that prediction techniques have to deal with in a Web environment is efficiency. Fortunately, since the WIG score is computed just over the terms and the phrases that appear in the query, this calculation can be made very efficient with the support of an index. On the other hand, the computation of QF and FRC is relatively less efficient since QF needs to retrieve the whole collection twice and FRC needs to repeatedly rank the perturbed documents. How to improve the efficiency of QF and FRC is our future work.
In addition, the prediction techniques proposed in this paper have the potential to improve retrieval performance when combined with other IR techniques. For example, our techniques can be incorporated into popular query modification techniques such as query expansion and query relaxation. Guided by performance prediction, we can make better decisions on when and how to modify queries to enhance retrieval effectiveness. We would like to carry out research in this direction in the future.
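For concreteness, the clarity baseline adopted in Section 2, which measures the KL-divergence between a query language model estimated from the top retrieved documents and the collection language model, can be sketched in Python as follows. This is a minimal illustration rather than the authors' implementation: the function name and smoothing floor are our own, and both models are assumed to be given as term-to-probability dictionaries.

import math

def clarity_score(query_model, collection_model, floor=1e-10):
    # KL-divergence between the query model P(w|Q), estimated from the
    # top retrieved documents, and the collection model P(w|C).
    # Higher scores indicate a more coherent (easier) result list.
    score = 0.0
    for term, p_q in query_model.items():
        p_c = max(collection_model.get(term, 0.0), floor)  # floor unseen terms
        score += p_q * math.log2(p_q / p_c)
    return score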
H-53
Context Sensitive Stemming for Web Search
Traditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to their root form before indexing, and applying a similar transformation to query terms. Although it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation. In this paper, we propose a context sensitive stemming method that addresses these two issues. Two unique properties make our approach feasible for Web Search. First, based on statistical language modeling, we perform context sensitive analysis on the query side. We accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine. This dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time. Second, our approach performs a context sensitive document matching for those expanded variants. This conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision. Using word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that by stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic.
[ "stem", "stem", "web search", "languag model", "context sensit document match", "lovin stemmer", "porter stemmer", "candid gener", "queri segment", "head word detect", "context sensit queri stem", "unigram languag model", "bigram languag model" ]
[ "P", "P", "P", "P", "P", "U", "U", "U", "M", "M", "R", "M", "M" ]
Context Sensitive Stemming for Web Search Fuchun Peng Nawaaz Ahmed Xin Li Yumao Lu Yahoo! Inc. 701 First Avenue Sunnyvale, California 94089 {fuchun, nawaaz, xinli, yumaol}@yahoo-inc.com ABSTRACT Traditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to their root form before indexing, and applying a similar transformation to query terms. Although it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation. In this paper, we propose a context sensitive stemming method that addresses these two issues. Two unique properties make our approach feasible for Web Search. First, based on statistical language modeling, we perform context sensitive analysis on the query side. We accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine. This dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time. Second, our approach performs a context sensitive document matching for those expanded variants. This conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision. Using word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that by stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic. Categories and Subject Descriptors H.3.3 [Information Systems]: Information Storage and Retrieval-Query formulation General Terms Algorithms, Experimentation 1. INTRODUCTION Web search has now become a major tool in our daily lives for information seeking. One of the important issues in Web search is that user queries are often not best formulated to get optimal results. For example, running shoe is a query that occurs frequently in query logs. However, the query running shoes is much more likely to give better search results than the original query because documents matching the intent of this query usually contain the words running shoes. Correctly formulating a query requires the user to accurately predict which word form is used in the documents that best satisfy his or her information needs. This is difficult even for experienced users, and especially difficult for non-native speakers. One traditional solution is to use stemming [16, 18], the process of transforming inflected or derived words to their root form so that a search term will match and retrieve documents containing all forms of the term. Thus, the word run will match running, ran, runs, and shoe will match shoes and shoeing. Stemming can be done either on the terms in a document during indexing (and applying the same transformation to the query terms during query processing) or by expanding the query with the variants during query processing. Stemming during indexing allows very little flexibility during query processing, while stemming by query expansion allows handling each query differently, and hence is preferred. Although traditional stemming increases recall by matching word variants [13], it can reduce precision by retrieving too many documents that have been incorrectly matched.
When examining the results of applying stemming to a large number of queries, one usually finds that nearly equal numbers of queries are helped and hurt by the technique [6]. In addition, it reduces system performance because the search engine has to match all the word variants. As we will show in the experiments, this is true even if we simplify stemming to pluralization handling, which is the process of converting a word from its plural to singular form, or vice versa. Thus, one needs to be very cautious when using stemming in Web search engines. One problem of traditional stemming is its blind transformation of all query terms, that is, it always performs the same transformation for the same query word without considering the context of the word. For example, the word book has four forms book, books, booking, booked, and store has four forms store, stores, storing, stored. For the query book store, expanding both words to all of their variants significantly increases computation cost and hurts precision, since not all of the variants are useful for this query. Transforming book store to match book stores is fine, but matching book storing or booking store is not. A weighting method that gives variant words smaller weights alleviates the problems to a certain extent if the weights accurately reflect the importance of the variant in this particular query. However, uniform weighting is not going to work, and query dependent weighting is still a challenging unsolved problem [20]. A second problem of traditional stemming is its blind matching of all occurrences in documents. For the query book store, a transformation that allows the variant stores to be matched will cause every occurrence of stores in the document to be treated as equivalent to the query term store. Thus, a document containing the fragment reading a book in coffee stores will be matched, causing many wrong documents to be selected. Although we hope the ranking function can correctly handle these, with many more candidates to rank, the risk of making mistakes increases. To alleviate these two problems, we propose a context sensitive stemming approach for Web search. Our solution consists of two context sensitive analyses, one on the query side and the other on the document side. On the query side, we propose a statistical language modeling based approach to predict which word variants are better forms than the original word for search purposes, and expand the query with only those forms. On the document side, we propose a conservative context sensitive matching for the transformed word variants, only matching document occurrences in the context of other terms in the query. Our model is simple yet effective and efficient, making it feasible to be used in real commercial Web search engines. We use pluralization handling as a running example for our stemming approach. The motivation for using pluralization handling as an example is to show that even such simple stemming, if handled correctly, can give significant benefits to search relevance. As far as we know, no previous research has systematically investigated the usage of pluralization in Web search. We should point out that the method we propose is not limited to pluralization handling; it is a general stemming technique, and can also be applied to general query expansion. Experiments on general stemming yield additional significant improvements over pluralization handling for long queries, although details will not be reported in this paper.
In the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2. We describe the details of the context sensitive stemming approach in Section 3. We then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5. Finally, we conclude the paper in Section 6. 2. RELATED WORK Stemming is a long studied technology. Many stemmers have been developed, such as the Lovins stemmer [16] and the Porter stemmer [18]. The Porter stemmer is widely used due to its simplicity and effectiveness in many applications. However, the Porter stemmer makes many mistakes because its simple rules cannot fully describe English morphology. Corpus analysis is used to improve the Porter stemmer [26] by creating equivalence classes for words that are morphologically similar and occur in similar context as measured by expected mutual information [23]. We use a similar corpus based approach for stemming by computing the similarity between two words based on their distributional context features, which can be more than just adjacent words [15], and then only keep the morphologically similar words as candidates. Using stemming in information retrieval is also a well known technique [8, 10]. However, the effectiveness of stemming for English query systems was previously reported to be rather limited. Lennon et al. [17] compared the Lovins and Porter algorithms and found little improvement in retrieval performance. Later, Harman [9] compared three general stemming techniques in text retrieval experiments including pluralization handling (called the S stemmer in the paper). They also proposed selective stemming based on query length and term importance, but no positive results were reported. On the other hand, Krovetz [14] performed comparisons over small numbers of documents (from 400 to 12k) and showed dramatic precision improvement (up to 45%). However, due to the limited number of tested queries (less than 100) and the small size of the collection, the results are hard to generalize to Web search. These mixed results, mostly failures, led early IR researchers to deem stemming irrelevant in general for English [4], although recent research has shown stemming has greater benefits for retrieval in other languages [2]. We suspect the previous failures were mainly due to the two problems we mentioned in the introduction. Blind stemming, or a simple query length based selective stemming as used in [9], is not enough. Stemming has to be decided on a case by case basis, not only at the query level but also at the document level. As we will show, if handled correctly, significant improvement can be achieved. A more general problem related to stemming is query reformulation [3, 12] and query expansion, which expands words not only with word variants [7, 22, 24, 25]. To decide which expanded words to use, people often use pseudo-relevance feedback techniques that send the original query to a search engine and retrieve the top documents, extract relevant words from these top documents as additional query words, and resubmit the expanded query again [21]. This normally requires sending a query multiple times to the search engine, and it is not cost effective for processing the huge number of queries involved in Web search. In addition, query expansion, including query reformulation [3, 12], has a high risk of changing the user intent (called query drift).
Since the expanded words may have different meanings, adding them to the query could potentially change the intent of the original query. Thus query expansion based on pseudo-relevance feedback and query reformulation can provide suggestions to users for interactive refinement but can hardly be directly used for Web search. On the other hand, stemming is much more conservative since most of the time, stemming preserves the original search intent. While most work on query expansion focuses on recall enhancement, our work focuses on increasing both recall and precision. The increase in recall is obvious. With quality stemming, good documents which were not selected before stemming will be pushed up and low quality documents will be degraded. On selective query expansion, Cronen-Townsend et al. [6] proposed a method for selective query expansion based on comparing the Kullback-Leibler divergence of the results from the unexpanded query and the results from the expanded query. This is similar to relevance feedback in the sense that it requires multiple retrieval passes. If a word can be expanded into several words, it requires running this process multiple times to decide which expanded word is useful. It is expensive to deploy this in production Web search engines. Our method predicts the quality of expansion based on offline information without sending the query to a search engine. In summary, we propose a novel approach to attack an old, yet still important and challenging problem for Web search: stemming. Our approach is unique in that it performs predictive stemming on a per query basis without relevance feedback from the Web, using the context of the variants in documents to preserve precision. It is simple, yet very efficient and effective, making real time stemming feasible for Web search. Our results affirm that stemming is indeed very important to large scale information retrieval. 3. CONTEXT SENSITIVE STEMMING 3.1 Overview Our system has four components as illustrated in Figure 1: candidate generation, query segmentation and head word detection, context sensitive query stemming and context sensitive document matching. Candidate generation (component 1) is performed offline and generated candidates are stored in a dictionary. For an input query, we first segment the query into concepts and detect the head word for each concept (component 2). We then use statistical language modeling to decide whether a particular variant is useful (component 3), and finally for the expanded variants, we perform context sensitive document matching (component 4). Below we discuss each of the components in more detail.

[Figure 1: System Components. The four components (candidate generation, e.g. hotel -> hotels; query segmentation and head word detection; selective word expansion, e.g. comparisons -> comparison; and context sensitive document matching) are illustrated on the example query hotel price comparisons.]

3.2 Expansion candidate generation One of the ways to generate candidates is using the Porter stemmer [18]. The Porter stemmer simply uses morphological rules to convert a word to its base form. It has no knowledge of the semantic meaning of the words and sometimes makes serious mistakes, such as executive to execution, news to new, and paste to past. A more conservative way is based on using corpus analysis to improve the Porter stemmer results [26]. The corpus analysis we do is based on word distributional similarity [15].
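A minimal Python sketch of this similarity computation follows (the rationale and the exact context features are described in the next paragraph; the sparse feature vectors are assumed to have been mined offline from a Web corpus, and the function name is ours):

import math

def cosine_similarity(features_a, features_b):
    # Each word is represented as a sparse vector of context-feature counts
    # (e.g., the bigrams to the left and right of the word); the distributional
    # similarity of two words is the cosine of their two feature vectors.
    dot = sum(v * features_b.get(f, 0.0) for f, v in features_a.items())
    norm_a = math.sqrt(sum(v * v for v in features_a.values()))
    norm_b = math.sqrt(sum(v * v for v in features_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)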
The rationale of using distributional word similarity is that true variants tend to be used in similar contexts. In the distributional word similarity calculation, each word is represented with a vector of features derived from the context of the word. We use the bigrams to the left and right of the word as its context features, by mining a huge Web corpus. The similarity between two words is the cosine similarity between the two corresponding feature vectors. The top 20 most similar words to develop are shown in Table 1.

rank  candidate     score    rank  candidate      score
0     develop       1        10    berts          0.119
1     developing    0.339    11    wads           0.116
2     developed     0.176    12    developer      0.107
3     incubator     0.160    13    promoting      0.100
4     develops      0.150    14    developmental  0.091
5     development   0.148    15    reengineering  0.090
6     tutoring      0.138    16    build          0.083
7     analyzing     0.128    17    construct      0.081
8     developement  0.128    18    educational    0.081
9     automation    0.126    19    institute      0.077
Table 1: Top 20 most similar candidates to the word develop. Column score is the similarity score.

To determine the stemming candidates, we apply a few Porter stemmer [18] morphological rules to the similarity list. After applying these rules, for the word develop, the stemming candidates are developing, developed, develops, development, developement, developer, developmental. For the pluralization handling purpose, only the candidate develops is retained. One thing we note from observing the distributionally similar words is that they are closely related semantically. These words might serve as candidates for general query expansion, a topic we will investigate in the future. 3.3 Segmentation and headword identification For long queries, it is quite important to detect the concepts in the query and the most important words for those concepts. We first break a query into segments, each segment representing a concept which normally is a noun phrase. For each of the noun phrases, we then detect the most important word, which we call the head word. Segmentation is also used in document sensitive matching (section 3.5) to enforce proximity. To break a query into segments, we have to define a criterion to measure the strength of the relation between words. One effective method is to use mutual information as an indicator of whether or not to split two words [19]. We use a log of 25M queries and collect the bigram and unigram frequencies from it. For every incoming query, we compute the mutual information of two adjacent words; if it passes a predefined threshold, we do not split the query between those two words and move on to the next word. We continue this process until the mutual information between two words is below the threshold, and then create a concept boundary there. Table 2 shows some examples of query segmentation.

[running shoe]
[best] [new york] [medical schools]
[pictures] [of] [white house]
[cookies] [in] [san francisco]
[hotel] [price comparison]
Table 2: Query segmentation: a segment is bracketed.

The ideal way of finding the head word of a concept is to do syntactic parsing to determine the dependency structure of the query. Query parsing is more difficult than sentence parsing since many queries are not grammatical and are very short. Applying a parser trained on sentences from documents to queries will have poor performance. In our solution, we just use simple heuristic rules, and this works very well in practice for English.
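Before turning to the head word heuristic, the mutual-information segmentation just described can be sketched as follows; a minimal Python illustration in which the threshold value and the count interfaces are our own assumptions, not values from the paper:

import math

def segment_query(words, unigram_counts, bigram_counts, total_tokens, threshold=1.0):
    # Greedy left-to-right segmentation: adjacent words stay in the same
    # segment when their pointwise mutual information passes a threshold;
    # otherwise a concept boundary is created between them.
    segments, current = [], [words[0]]
    for w_prev, w in zip(words, words[1:]):
        p_xy = bigram_counts.get((w_prev, w), 0) / total_tokens
        p_x = unigram_counts.get(w_prev, 0) / total_tokens
        p_y = unigram_counts.get(w, 0) / total_tokens
        if p_xy > 0 and p_x > 0 and p_y > 0:
            mi = math.log(p_xy / (p_x * p_y))
        else:
            mi = float("-inf")
        if mi >= threshold:
            current.append(w)        # strong association: keep in same segment
        else:
            segments.append(current)  # weak association: concept boundary
            current = [w]
    segments.append(current)
    return segments

# With suitable query-log counts, segment_query("hotel price comparison".split(), ...)
# would yield [["hotel"], ["price", "comparison"]], matching Table 2.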
For an English noun phrase, the head word is typically the last nonstop word, unless the phrase is of a particular pattern, like XYZ of/in/at/from UVW. In such cases, the head word is typically the last nonstop word of XYZ. 3.4 Context sensitive word expansion After detecting which words are the most important words to expand, we have to decide whether the expansions will be useful. Our statistics show that about half of the queries can be transformed by pluralization via naive stemming. Among this half, about 25% of the queries improve relevance when transformed, the majority (about 50%) do not change their top 5 results, and the remaining 25% perform worse. Thus, it is extremely important to identify which queries should not be stemmed for the purpose of maximizing relevance improvement and minimizing stemming cost. In addition, for a query with multiple words that can be transformed, or a word with multiple variants, not all of the expansions are useful. Taking the query hotel price comparison as an example, we decide that hotel and price comparison are two concepts. Head words hotel and comparison can be expanded to hotels and comparisons. Are both transformations useful? To test whether an expansion is useful, we have to know whether the expanded query is likely to get more relevant documents from the Web, which can be quantified by the probability of the query occurring as a string on the Web. The more likely a query is to occur on the Web, the more relevant documents the query is able to return. Now the whole problem becomes how to calculate the probability of the query occurring on the Web. Calculating the probability of a string occurring in a corpus is a well known language modeling problem. The goal of language modeling is to predict the probability of naturally occurring word sequences, $s = w_1 w_2 \ldots w_N$; or more simply, to put high probability on word sequences that actually occur (and low probability on word sequences that never occur). The simplest and most successful approach to language modeling is still based on the n-gram model. By the chain rule of probability one can write the probability of any word sequence as

$\Pr(w_1 w_2 \ldots w_N) = \prod_{i=1}^{N} \Pr(w_i \mid w_1 \ldots w_{i-1})$   (1)

An n-gram model approximates this probability by assuming that the only words relevant to predicting $\Pr(w_i \mid w_1 \ldots w_{i-1})$ are the previous $n-1$ words; i.e., $\Pr(w_i \mid w_1 \ldots w_{i-1}) = \Pr(w_i \mid w_{i-n+1} \ldots w_{i-1})$. A straightforward maximum likelihood estimate of n-gram probabilities from a corpus is given by the observed frequency of each of the patterns

$\Pr(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \frac{\#(w_{i-n+1} \ldots w_i)}{\#(w_{i-n+1} \ldots w_{i-1})}$   (2)

where $\#(\cdot)$ denotes the number of occurrences of a specified gram in the training corpus. Although one could attempt to use simple n-gram models to capture long range dependencies in language, attempting to do so directly immediately creates sparse data problems: using grams of length up to $n$ entails estimating the probability of $W^n$ events, where $W$ is the size of the word vocabulary. This quickly overwhelms modern computational and data resources for even modest choices of $n$ (beyond 3 to 6). Also, because of the heavy tailed nature of language (i.e., Zipf's law), one is likely to encounter novel n-grams that were never witnessed during training in any test corpus, and therefore some mechanism for assigning non-zero probability to novel n-grams is a central and unavoidable issue in statistical language modeling.
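For illustration, here is a minimal Python sketch of the unsmoothed bigram instance of Eqs. (1) and (2), together with the length-normalized entropy score used to rank query variants in Section 3.4; the count interfaces are our own assumptions, and a deployed system would apply the back-off smoothing described next rather than returning zero for unseen n-grams:

import math

def bigram_prob(w_prev, w, unigram_counts, bigram_counts):
    # Eq. (2) with n = 2: Pr(w_i | w_{i-1}) = #(w_{i-1} w_i) / #(w_{i-1})
    denom = unigram_counts.get(w_prev, 0)
    if denom == 0:
        return 0.0  # unseen history; a back-off estimator handles this in practice
    return bigram_counts.get((w_prev, w), 0) / denom

def entropy_score(words, unigram_counts, bigram_counts, total_tokens):
    # Chain rule of Eq. (1), then the length-normalized entropy
    # -(1/N) * log2 Pr(w_1 ... w_N); lower entropy means a more likely string.
    prob = unigram_counts.get(words[0], 0) / total_tokens
    for w_prev, w in zip(words, words[1:]):
        prob *= bigram_prob(w_prev, w, unigram_counts, bigram_counts)
    if prob == 0.0:
        return float("inf")
    return -math.log2(prob) / len(words)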
One standard approach to smoothing probability estimates to cope with sparse data problems (and to cope with potentially missing n-grams) is to use some sort of back-off estimator:

$\Pr(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \begin{cases} \hat{\Pr}(w_i \mid w_{i-n+1} \ldots w_{i-1}), & \text{if } \#(w_{i-n+1} \ldots w_i) > 0 \\ \beta(w_{i-n+1} \ldots w_{i-1}) \times \Pr(w_i \mid w_{i-n+2} \ldots w_{i-1}), & \text{otherwise} \end{cases}$   (3)

where

$\hat{\Pr}(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \text{discount} \cdot \frac{\#(w_{i-n+1} \ldots w_i)}{\#(w_{i-n+1} \ldots w_{i-1})}$   (4)

is the discounted probability and $\beta(w_{i-n+1} \ldots w_{i-1})$ is a normalization constant

$\beta(w_{i-n+1} \ldots w_{i-1}) = \frac{\displaystyle 1 - \sum_{x \in (w_{i-n+1} \ldots w_{i-1} x)} \hat{\Pr}(x \mid w_{i-n+1} \ldots w_{i-1})}{\displaystyle 1 - \sum_{x \in (w_{i-n+1} \ldots w_{i-1} x)} \hat{\Pr}(x \mid w_{i-n+2} \ldots w_{i-1})}$   (5)

The discounted probability (4) can be computed with different smoothing techniques, including absolute smoothing, Good-Turing smoothing, linear smoothing, and Witten-Bell smoothing [5]. We used absolute smoothing in our experiments. Since the likelihood of a string, $\Pr(w_1 w_2 \ldots w_N)$, is a very small number and hard to interpret, we use entropy as defined below to score the string:

$\text{Entropy} = -\frac{1}{N} \log_2 \Pr(w_1 w_2 \ldots w_N)$   (6)

Now getting back to the example of the query hotel price comparison, there are four variants of this query, and the entropy of these four candidates is shown in Table 3. We can see that all alternatives are less likely than the input query. It is therefore not useful to make an expansion for this query. On the other hand, if the input query is hotel price comparisons, which is the second alternative in the table, then there is a better alternative than the input query, and it should therefore be expanded. To tolerate the variations in probability estimation, we relax the selection criterion to those query alternatives whose scores are within a certain distance (10% in our experiments) of the best score.

Query variation                       Entropy
hotel price comparison (original)     6.177
hotel price comparisons               6.597
hotels price comparison               6.937
hotels price comparisons              7.360
Table 3: Variations of the query hotel price comparison ranked by entropy score; the original query is marked.

3.5 Context sensitive document matching Even after we know which word variants are likely to be useful, we have to be conservative in document matching for the expanded variants. For the query hotel price comparisons, we decided that the word comparisons is expanded to include comparison. However, not every occurrence of comparison in the document is of interest. A page which is about comparing customer service can contain all of the words hotel, price, comparisons, and comparison. This page is not a good page for the query. If we accept matches of every occurrence of comparison, it will hurt retrieval precision, and this is one of the main reasons why most stemming approaches do not work well for information retrieval. To address this problem, we have a proximity constraint that considers the context around the expanded variant in the document. A variant match is considered valid only if the variant occurs in the same context as the original word does. The context is the left or the right non-stop segments of the original word (a context segment cannot be a single stop word). Taking the same query as an example, the context of comparisons is price. The expanded word comparison is only valid if it is in the same context of comparisons, which is after the word price. Thus, we should only match those occurrences of comparison in the document if they occur after the word price. Considering the fact that queries and documents may not represent the intent in exactly the same way, we relax this proximity constraint to allow variant occurrences within a window of some fixed size.
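This windowed proximity check can be sketched as follows; a minimal Python illustration over tokenized document text, where the function name is ours and the default window size anticipates the choice discussed immediately below:

def valid_variant_match(doc_tokens, variant, context_word, window=4):
    # Accept an occurrence of an expanded variant (e.g., "comparison" for the
    # query word "comparisons") only if the query-side context word
    # (e.g., "price") appears within a fixed-size window around it.
    for i, token in enumerate(doc_tokens):
        if token != variant:
            continue
        lo, hi = max(0, i - window), min(len(doc_tokens), i + window + 1)
        if context_word in doc_tokens[lo:hi]:
            return True
    return False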
If the expanded word comparison occurs within the context of price within a window, it is considered valid. The smaller the window size is, the more restrictive the matching. We use a window size of 4, which typically captures contexts that include the containing and adjacent noun phrases. 4. EXPERIMENTAL EVALUATION 4.1 Evaluation metrics We will measure both relevance improvement and the stemming cost required to achieve that relevance. 4.1.1 Relevance measurement We use a variant of the average Discounted Cumulative Gain (DCG), a recently popularized scheme to measure search engine relevance [1, 11]. Given a query and a ranked list of K documents (K is set to 5 in our experiments), the DCG(K) score for this query is calculated as follows:

$\mathrm{DCG}(K) = \sum_{k=1}^{K} \frac{g_k}{\log_2(1 + k)}$   (7)

where $g_k$ is the weight for the document at rank $k$. A higher degree of relevance corresponds to a higher weight. A page is graded into one of five scales: Perfect, Excellent, Good, Fair, Bad, with corresponding weights. We use dcg to represent the average DCG(5) over a set of test queries. 4.1.2 Stemming cost Another metric is to measure the additional cost incurred by stemming. Given the same level of relevance improvement, we prefer a stemming method that has less additional cost. We measure this by the percentage of queries that are actually stemmed, over all the queries that could possibly be stemmed. 4.2 Data preparation We randomly sample 870 queries from a three month query log, with 290 from each month. Among all these 870 queries, we remove all misspelled queries since misspelled queries are not of interest to stemming. We also remove all one word queries since stemming one word queries without context has a high risk of changing query intent, especially for short words. In the end, we have 529 correctly spelled queries with at least 2 words. 4.3 Naive stemming for Web search Before explaining the experiments and results in detail, we describe the traditional way of using stemming for Web search, referred to as the naive model. This treats every word variant as equivalent for all possible words in the query. The query book store will be transformed into (book OR books)(store OR stores) when limiting stemming to pluralization handling only, where OR is an operator that denotes the equivalence of the left and right arguments. 4.4 Experimental setup The baseline model is the model without stemming. We first run the naive model to see how well it performs over the baseline. Then we improve the naive stemming model by document sensitive matching, referred to as the document sensitive matching model. This model makes the same stemming as the naive model on the query side, but performs conservative matching on the document side using the strategy described in section 3.5. The naive model and the document sensitive matching model stem the most queries. Out of the 529 queries, there are 408 queries that they stem, corresponding to 46.7% of query traffic (out of a total of 870). We then further improve the document sensitive matching model from the query side with selective word stemming based on statistical language modeling (section 3.4), referred to as the selective stemming model. Based on language modeling prediction, this model stems only a subset of the 408 queries stemmed by the document sensitive matching model. We experiment with a unigram language model and a bigram language model.
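For reference, the DCG(K) measure of Eq. (7) that underlies all comparisons in Section 4.5 can be computed directly; a minimal Python sketch, assuming the editorial grades of the top-ranked pages have already been mapped to numeric weights (the exact weight values are not specified in the paper):

import math

def dcg(grade_weights, k=5):
    # Eq. (7): DCG(K) = sum over ranks 1..K of g_k / log2(1 + k),
    # where g_k is the numeric weight of the grade (Perfect, Excellent,
    # Good, Fair, Bad) assigned to the page at rank k.
    return sum(g / math.log2(1 + rank)
               for rank, g in enumerate(grade_weights[:k], start=1))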
Since we only care how much we can improve over the naive model, we will only use these 408 queries (all the queries that are affected by the naive stemming model) in the experiments. To get a sense of how these models perform, we also have an oracle model that gives the upper-bound performance a stemmer can achieve on this data. The oracle model only expands a word if the stemming will give better results. To analyze the influence of pluralization handling on different query categories, we divide queries into short queries and long queries. Among the 408 queries stemmed by the naive model, there are 272 short queries with 2 or 3 words, and 136 long queries with at least 4 words. 4.5 Results We summarize the overall results in Table 4, and present the results on short queries and long queries separately in Table 5. Each row in Table 4 is a stemming strategy described in section 4.4. The first column is the name of the strategy. The second column is the number of queries affected by this strategy; this column measures the stemming cost, and the numbers should be low for the same level of dcg. The third column is the average dcg score over all tested queries in this category (including the ones that were not stemmed by the strategy). The fourth column is the relative improvement over the baseline, and the last column is the p-value of the Wilcoxon significance test. There are several observations about the results. We can see that naive stemming obtains only a statistically insignificant improvement of 1.5%. Looking at Table 5, it gives an improvement of 2.7% on short queries. However, it also hurts long queries by -2.4%. Overall, the improvement is canceled out. The reason that it improves short queries is that most short queries only have one word that can be stemmed. Thus, blindly pluralizing short queries is relatively safe. However, for long queries, most queries can have multiple words that can be pluralized. Expanding all of them without selection will significantly hurt precision. Document context sensitive stemming gives a significant lift to the performance, from 2.7% to 4.2% for short queries and from -2.4% to -1.6% for long queries, with an overall lift from 1.5% to 2.8%. The improvement comes from the conservative context sensitive document matching. An expanded word is valid only if it occurs within the context of the original query in the document. This reduces many spurious matches. However, we still notice that for long queries, context sensitive stemming is not able to improve performance because it still selects too many documents and gives the ranking function a hard problem. While the chosen window size of 4 works the best amongst all the choices, it still allows spurious matches. It is possible that the window size needs to be chosen on a per query basis to ensure tighter proximity constraints for different types of noun phrases. Selective word pluralization further helps resolve the problem faced by document context sensitive stemming. Rather than stemming every word and placing all the burden on the ranking algorithm, it tries to eliminate unnecessary stemming in the first place. By predicting which word variants are going to be useful, we can dramatically reduce the number of stemmed words, thus improving both the recall and the precision. With the unigram language model, we can reduce the stemming cost by 26.7% (from 408/408 to 300/408) and lift the overall dcg improvement from 2.8% to 3.4%. In particular, it gives significant improvements on long queries.
The dcg gain is turned from negative to positive, from -1.6% to 1.1%. This confirms our hypothesis that reducing unnecessary word expansion leads to precision improvement. For short queries too, we observe both dcg improvement and stemming cost reduction with the unigram language model. The advantages of predictive word expansion with a language model are further boosted with a better bigram language model. The overall dcg gain is lifted from 3.4% to 3.9%, and stemming cost is dramatically reduced from 408/408 to 250/408, corresponding to only 29% of query traffic (250 out of 870) and a 1.8% dcg improvement over all query traffic. For short queries, the bigram language model improves the dcg gain from 4.4% to 4.7%, and reduces stemming cost from 272/272 to 150/272. For long queries, the bigram language model improves the dcg gain from 1.1% to 2.5%, and reduces stemming cost from 136/136 to 100/136. We observe that the bigram language model gives a larger lift for long queries. This is because the uncertainty in long queries is larger and a more powerful language model is needed. We hypothesize that a trigram language model would give a further lift for long queries and leave this for future investigation. Considering the tight upper-bound on the improvement to be gained from pluralization handling (via the oracle model), the current performance on short queries is very satisfying. (Note that this upper-bound is for pluralization handling only, not for general stemming; general stemming gives an 8% upper-bound, which is quite substantial in terms of our metrics.) For short queries, the dcg gain upper-bound is 6.3% for perfect pluralization handling; our current gain is 4.7% with a bigram language model. For long queries, the dcg gain upper-bound is 4.6% for perfect pluralization handling; our current gain is 2.5% with a bigram language model. We may gain additional benefit with a more powerful language model for long queries. However, the difficulties of long queries come from many other aspects including the proximity and the segmentation problem. These problems have to be addressed separately. Looking at the upper-bound of overhead reduction for oracle stemming, 75% (308/408) of the naive stemmings are wasteful. We currently capture about half of them. Further reduction of the overhead requires sacrificing the dcg gain. Now we can compare the stemming strategies from a different aspect. Instead of looking at the influence over all queries as we described above, Table 6 summarizes the dcg improvements over the affected queries only. We can see that the number of affected queries decreases as the stemming strategy becomes more accurate (dcg improvement). For the bigram language model, over the 250/408 stemmed queries, the dcg improvement is 6.1%. An interesting observation is that the average dcg decreases with a better model, which indicates that a better stemming strategy stems more difficult queries (low dcg queries). 5. DISCUSSIONS 5.1 Language models from query vs. from Web As we mentioned in Section 1, we are trying to predict the probability of a string occurring on the Web. The language model should describe the occurrence of the string on the Web. However, the query log is also a good resource.
Model                              Affected Queries   dcg     dcg Improvement   p-value
baseline                           0/408              7.102   N/A               N/A
naive model                        408/408            7.206   1.5%              0.22
document context sensitive model   408/408            7.302   2.8%              0.014
selective model: unigram LM        300/408            7.321   3.4%              0.001
selective model: bigram LM         250/408            7.381   3.9%              0.001
oracle model                       100/408            7.519   5.9%              0.001
Table 4: Results comparison of different stemming strategies over all queries affected by naive stemming.

Short Query Results
Model                              Affected Queries   dcg Improvement   p-value
baseline                           0/272              N/A               N/A
naive model                        272/272            2.7%              0.48
document context sensitive model   272/272            4.2%              0.002
selective model: unigram LM        185/272            4.4%              0.001
selective model: bigram LM         150/272            4.7%              0.001
oracle model                       71/272             6.3%              0.001

Long Query Results
Model                              Affected Queries   dcg Improvement   p-value
baseline                           0/136              N/A               N/A
naive model                        136/136            -2.4%             0.25
document context sensitive model   136/136            -1.6%             0.27
selective model: unigram LM        115/136            1.1%              0.001
selective model: bigram LM         100/136            2.5%              0.001
oracle model                       29/136             4.6%              0.001
Table 5: Results comparison of different stemming strategies over short queries and long queries.

Users reformulate a query using many different variants to get good results. To test the hypothesis that we can learn reliable transformation probabilities from the query log, we trained a language model from the same query log of 25M queries used to learn segmentation, and used it for prediction. We observed a slight performance decrease compared to the model trained on Web frequencies. In particular, the performance for the unigram LM was not affected, but the dcg gain for the bigram LM changed from 4.7% to 4.5% for short queries. Thus, the query log can serve as a good approximation of the Web frequencies. 5.2 How linguistics helps Some linguistic knowledge is useful in stemming. For the pluralization handling case, pluralization and de-pluralization are not symmetric. A plural word used in a query indicates a special intent. For example, the query new york hotels is looking for a list of hotels in new york, not the specific new york hotel which might be a hotel located in California. A simple equivalence of hotel to hotels might boost a particular page about new york hotel to the top rank. To capture this intent, we have to make sure the document is a general page about hotels in new york. We do this by requiring that the plural word hotels appears in the document. On the other hand, converting a singular word to plural is safer since a general purpose page normally contains specific information. We observed a slight overall dcg decrease, although not statistically significant, for document context sensitive stemming if we do not consider this asymmetric property. 5.3 Error analysis One type of mistake we noticed, rare but seriously hurting relevance, is search intent change after stemming. Generally speaking, pluralization or de-pluralization keeps the original intent. However, the intent could change in a few cases. For one example of such a query, job at apple, we pluralize job to jobs. This stemming makes the original query ambiguous. The query job OR jobs at apple has two intents. One is employment opportunities at apple, and the other is a person working at Apple, Steve Jobs, who is the CEO and co-founder of the company. Thus, the results after query stemming return Steve Jobs as one of the results in the top 5. One solution is performing result-set based analysis to check if the intent is changed. This is similar to relevance feedback and requires second phase ranking.
A second type of mistake is the entity/concept recognition problem, which includes two kinds. One is that the stemmed word variant now matches part of an entity or concept. For example, the query cookies in san francisco is pluralized to cookies OR cookie in san francisco. The results will match cookie jar in san francisco. Although cookie still means the same thing as cookies, cookie jar is a different concept. Another kind is that the unstemmed word matches an entity or concept because of the stemming of other words. For example, quote ICE is pluralized to quote OR quotes ICE. The original intent for this query is searching for the stock quote for ticker ICE. However, we noticed that among the top results, one of the results is Food quotes: Ice cream. This is matched because of the pluralized word quotes. The unchanged word ICE matches part of the noun phrase ice cream here. To solve this kind of problem, we have to analyze the documents and recognize cookie jar and ice cream as concepts instead of two independent words. A third type of mistake occurs in long queries. For the query bar code reader software, two words are pluralized: code to codes and reader to readers. In fact, bar code reader in the original query is a strong concept and the internal words should not be changed. This is the segmentation and entity and noun phrase detection problem in queries, which we are actively attacking. For long queries, we should correctly identify the concepts in the query, and boost the proximity for the words within a concept.

Model                              Affected Queries   old dcg   new dcg   dcg Improvement
naive model                        408/408            7.102     7.206     1.5%
document context sensitive model   408/408            7.102     7.302     2.8%
selective model: unigram LM        300/408            5.904     6.187     4.8%
selective model: bigram LM         250/408            5.551     5.891     6.1%
Table 6: Results comparison over the stemmed queries only; columns old/new dcg give the dcg score over the affected queries before/after applying stemming.

6. CONCLUSIONS AND FUTURE WORK We have presented a simple yet elegant way of stemming for Web search. It improves naive stemming in two aspects: selective word expansion on the query side and conservative word occurrence matching on the document side. Using pluralization handling as an example, experiments on data from a major Web search engine show that it significantly improves Web relevance and reduces the stemming cost. It also significantly improves Web click-through rate (details not reported in the paper). For future work, we are investigating the problems identified in the error analysis section, including entity and noun phrase matching mistakes and improved segmentation. 7. REFERENCES [1] E. Agichtein, E. Brill, and S. T. Dumais. Improving Web Search Ranking by Incorporating User Behavior Information. In SIGIR, 2006. [2] E. Airio. Word Normalization and Decompounding in Mono- and Bilingual IR. Information Retrieval, 9:249-271, 2006. [3] P. Anick. Using Terminological Feedback for Web Search Refinement: a Log-based Study. In SIGIR, 2003. [4] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press/Addison Wesley, 1999. [5] S. Chen and J. Goodman. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Harvard University, 1998. [6] S. Cronen-Townsend, Y. Zhou, and B. Croft. A Framework for Selective Query Expansion. In CIKM, 2004. [7] H. Fang and C. Zhai. Semantic Term Matching in Axiomatic Approaches to Information Retrieval. In SIGIR, 2006. [8] W. B. Frakes. Term Conflation for Information Retrieval. In C. J.
Rijsbergen, editor, Research and Development in Information Retrieval, pages 383-389. Cambridge University Press, 1984. [9] D. Harman. How Effective is Suffixing? JASIS, 42(1):7-15, 1991. [10] D. Hull. Stemming Algorithms - A Case Study for Detailed Evaluation. JASIS, 47(1):70-84, 1996. [11] K. Jarvelin and J. Kekalainen. Cumulated Gain-Based Evaluation of IR Techniques. ACM TOIS, 20:422-446, 2002. [12] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating Query Substitutions. In WWW, 2006. [13] W. Kraaij and R. Pohlmann. Viewing Stemming as Recall Enhancement. In SIGIR, 1996. [14] R. Krovetz. Viewing Morphology as an Inference Process. In SIGIR, 1993. [15] D. Lin. Automatic Retrieval and Clustering of Similar Words. In COLING-ACL, 1998. [16] J. B. Lovins. Development of a Stemming Algorithm. Mechanical Translation and Computational Linguistics, 11:22-31, 1968. [17] M. Lennon, D. Peirce, B. Tarry, and P. Willett. An Evaluation of Some Conflation Algorithms for Information Retrieval. Journal of Information Science, 3:177-188, 1981. [18] M. Porter. An Algorithm for Suffix Stripping. Program, 14(3):130-137, 1980. [19] K. M. Risvik, T. Mikolajewski, and P. Boros. Query Segmentation for Web Search. In WWW, 2003. [20] S. E. Robertson. On Term Selection for Query Expansion. Journal of Documentation, 46(4):359-364, 1990. [21] G. Salton and C. Buckley. Improving Retrieval Performance by Relevance Feedback. JASIS, 41(4):288-297, 1990. [22] R. Sun, C.-H. Ong, and T.-S. Chua. Mining Dependency Relations for Query Expansion in Passage Retrieval. In SIGIR, 2006. [23] C. Van Rijsbergen. Information Retrieval. Butterworths, second edition, 1979. [24] B. Vélez, R. Weiss, M. A. Sheldon, and D. K. Gifford. Fast and Effective Query Refinement. In SIGIR, 1997. [25] J. Xu and B. Croft. Query Expansion using Local and Global Document Analysis. In SIGIR, 1996. [26] J. Xu and B. Croft. Corpus-based Stemming using Co-occurrence of Word Variants. ACM TOIS, 16(1):61-81, 1998.
Context Sensitive Stemming for Web Search ABSTRACT Traditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to the their root form before indexing, and applying a similar transformation to query terms. Although it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation. In this paper, we propose a context sensitive stemming method that addresses these two issues. Two unique properties make our approach feasible for Web Search. First, based on statistical language modeling, we perform context sensitive analysis on the query side. We accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine. This dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time. Second, our approach performs a context sensitive document matching for those expanded variants. This conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision. Using word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic. 1. INTRODUCTION Web search has now become a major tool in our daily lives for information seeking. One of the important issues in Web search is that user queries are often not best formulated to get optimal results. For example, "running shoe" is a query that occurs frequently in query logs. However, the query "running shoes" is much more likely to give better search results than the original query because documents matching the intent of this query usually contain the words "running shoes". Correctly formulating a query requires the user to accurately predict which word form is used in the documents that best satisfy his or her information needs. One traditional solution is to use stemming [16, 18], the process of transforming inflected or derived words to their root form so that a search term will match and retrieve documents containing all forms of the term. Stemming can be done either on the terms in a document during indexing (and applying the same transformation to the query terms during query processing) or by expanding the query with the variants during query processing. Stemming during indexing allows very little flexibility during query processing, while stemming by query expansion allows handling each query differently, and hence is preferred. Although traditional stemming increases recall by matching word variants [13], it can reduce precision by retrieving too many documents that have been incorrectly matched. When examining the results of applying stemming to a large number of queries, one usually finds that nearly equal numbers of queries are helped and hurt by the technique [6]. In addition, it reduces system performance because the search engine has to match all the word variants. As we will show in the experiments, this is true even if we simplify stemming to pluralization handling, which is the process of converting a word from its plural to singular form, or vice versa. Thus, one needs to be very cautious when using stemming in Web search engines. 
One problem of traditional stemming is its blind transformation of all query terms, that is, it always performs the same transformation for the same query word without considering the context of the word. For the query "book store", expanding both words to all of their variants significantly increases computation cost and hurts precision, since not all of the variants are useful for this query. A weighting method that gives variant words smaller weights alleviates the problems to a certain extent if the weights accurately reflect the importance of the variant in this particular query. However uniform weighting is not going to work and a query dependent weighting is still a challenging unsolved problem [20]. A second problem of traditional stemming is its blind matching of all occurrences in documents. For the query "book store", a transformation that allows the variant "stores" to be matched will cause every occurrence of "stores" in the document to be treated equivalent to the query term "store". Thus, a document containing the fragment "reading a book in coffee stores" will be matched, causing many wrong documents to be selected. To alleviate these two problems, we propose a context sensitive stemming approach for Web search. Our solution consists of two context sensitive analysis, one on the query side and the other on the document side. On the query side, we propose a statistical language modeling based approach to predict which word variants are better forms than the original word for search purpose and expanding the query with only those forms. On the document side, we propose a conservative context sensitive matching for the transformed word variants, only matching document occurrences in the context of other terms in the query. Our model is simple yet effective and efficient, making it feasible to be used in real commercial Web search engines. We use pluralization handling as a running example for our stemming approach. The motivation for using pluralization handling as an example is to show that even such simple stemming, if handled correctly, can give significant benefits to search relevance. As far as we know, no previous research has systematically investigated the usage of pluralization in Web search. As we have to point out, the method we propose is not limited to pluralization handling, it is a general stemming technique, and can also be applied to general query expansion. Experiments on general stemming yield additional significant improvements over pluralization handling for long queries, although details will not be reported in this paper. In the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2. We describe the details of the context sensitive stemming approach in Section 3. We then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5. Finally, we conclude the paper in Section 6. 2. RELATED WORK Stemming is a long studied technology. The Porter stemmer is widely used due to its simplicity and effectiveness in many applications. However, the Porter stemming makes many mistakes because its simple rules cannot fully describe English morphology. Using stemming in information retrieval is also a well known technique [8, 10]. However, the effectiveness of stemming for English query systems was previously reported to be rather limited. 
In the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2. We describe the details of the context sensitive stemming approach in Section 3. We then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5. Finally, we conclude the paper in Section 6. 2. RELATED WORK Stemming is a long studied technology. The Porter stemmer is widely used due to its simplicity and effectiveness in many applications. However, the Porter stemmer makes many mistakes because its simple rules cannot fully describe English morphology. Using stemming in information retrieval is also a well known technique [8, 10]. However, the effectiveness of stemming for English query systems was previously reported to be rather limited. Later, Harman [9] compared three general stemming techniques in text retrieval experiments, including pluralization handling (called the S stemmer in that paper). That work also proposed selective stemming based on query length and term importance, but no positive results were reported. However, due to the limited number of tested queries (fewer than 100) and the small size of the collection, the results are hard to generalize to Web search. We suspect the previous failures were mainly due to the two problems we mentioned in the introduction. Blind stemming, or a simple query length based selective stemming as used in [9], is not enough. Stemming has to be decided on a case by case basis, not only at the query level but also at the document level. More general problems related to stemming are query reformulation [3, 12] and query expansion, which expands the query not only with word variants [7, 22, 24, 25]. This normally requires sending a query multiple times to the search engine, and it is not cost effective for processing the huge amount of queries involved in Web search. In addition, query expansion, including query reformulation [3, 12], has a high risk of changing the user intent (called query drift). Since the expanded words may have different meanings, adding them to the query could potentially change the intent of the original query. Thus query expansion based on pseudo-relevance feedback and query reformulation can provide suggestions to users for interactive refinement, but can hardly be directly used for Web search. On the other hand, stemming is much more conservative: most of the time, stemming preserves the original search intent. While most work on query expansion focuses on recall enhancement, our work focuses on increasing both recall and precision. The increase in recall is obvious. With quality stemming, good documents which were not selected before stemming will be ranked higher, while low quality documents will be demoted. On selective query expansion, Cronen-Townsend et al. [6] proposed a method based on comparing the Kullback-Leibler divergence between the results of the unexpanded query and the results of the expanded query. This is similar to relevance feedback in the sense that it requires multiple retrieval passes. If a word can be expanded into several words, this process must be run multiple times to decide which expanded word is useful, which makes it expensive to deploy in production Web search engines. Our method predicts the quality of an expansion based on offline information, without sending the query to a search engine. In summary, we propose a novel approach to attack an old, yet still important and challenging problem for Web search: stemming. Our approach is unique in that it performs predictive stemming on a per query basis without relevance feedback from the Web, using the context of the variants in documents to preserve precision. It is simple, yet very efficient and effective, making real time stemming feasible for Web search. Our results affirm that stemming is indeed very important to large scale information retrieval. 6. CONCLUSIONS AND FUTURE WORK We have presented a simple yet elegant way of stemming for Web search. It improves naive stemming in two aspects: selective word expansion on the query side and conservative word occurrence matching on the document side.
Using pluralization handling as an example, experiments on data from a major Web search engine show that it significantly improves Web relevance and reduces the stemming cost. It also significantly improves Web click-through rate (details not reported in this paper). For future work, we are investigating the problems identified in the error analysis section, including entity and noun phrase matching mistakes and improved segmentation.
H-47
A Semantic Approach to Contextual Advertising
Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.
[ "semant", "contextu advertis", "contextu advertis", "match", "ad relev", "pai-per-click", "match mechan", "semant-syntact match", "keyword match", "hierarch taxonomi class", "document classifi", "top-k ad", "topic distanc" ]
[ "P", "P", "P", "P", "P", "U", "M", "M", "M", "U", "U", "M", "U" ]
A Semantic Approach to Contextual Advertising Andrei Broder Marcus Fontoura Vanja Josifovski Lance Riedel Yahoo! Research, 2821 Mission College Blvd, Santa Clara, CA 95054 {broder, marcusf, vanjaj, riedell}@yahoo-inc.com ABSTRACT Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features. Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Selection process General Terms: Algorithms, Measurement, Performance, Experimentation 1. INTRODUCTION Web advertising supports a large swath of today's Internet ecosystem. The total internet advertiser spend in the US alone in 2006 is estimated at over 17 billion dollars, with a growth rate of almost 20% year over year. A large part of this market consists of textual ads, that is, short text messages usually marked as sponsored links or similar. The main advertising channels used to distribute textual ads are: 1. Sponsored Search or Paid Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query. All major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency. 2. Contextual advertising or Context Match, which refers to the placement of commercial ads within the content of a generic web page. In contextual advertising there is usually a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience. Again, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services, but there are also many smaller players. The SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed. (See [5] for a brief history.) However, today, almost all of the for-profit non-transactional web sites (that is, sites that do not sell anything directly) rely at least in part on revenue from context match. CM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers.
Without this model, the web would be a lot smaller! The prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC). There are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad and pay-per-action where the advertiser pays only if the ad leads to a sale or similar transaction. For simplicity, we only deal with the PPC model in this paper. Given a page, rather than placing generic ads, it seems preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks. This intuition is supported by the analogy to conventional publishing where there are very successful magazines (e.g. Vogue) where a majority of the content is topical advertising (fashion in the case of Vogue) and by user studies that have confirmed that increased relevance increases the number of ad-clicks [4, 13]. Previous published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details). However targeting mechanisms based solely on phrases found within the text of the page can lead to problems: For example, a page about a famous golfer named John Maytag might trigger an ad for Maytag dishwashers since Maytag is a popular brand. Another example could be a page describing the Chevy Tahoe truck (a popular vehicle in US) triggering an ad about Lake Tahoe vacations. Polysemy is not the only culprit: there is a (maybe apocryphal) story about a lurid news item about a headless body found in a suitcase triggering an ad for Samsonite luggage! In all these examples the mismatch arises from the fact that the ads are not appropriate for the context. In order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase. The semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula. Hence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach. Furthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page. For example if the page is about an event in curling, a rare winter sport, and contains the words Alpine Meadows, the system would still rank highly ads for skiing in Alpine Meadows as these ads belong to the class skiing which is a sibling of the class curling and both of these classes share the parent winter sports. In some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy. The taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say Canon. The keywords capture the specificity to a level that is more dynamic and granular. In the digital camera example this would correspond to the level of a particular model, say Canon SD450 whose advertising life might be just a few months. Updating the taxonomy with new nodes or even new vocabulary each time a new model comes to the market is prohibitively expensive when we are dealing with millions of manufacturers. 
In addition to increased click-through rate (CTR) due to increased relevance, a significant but harder to quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience. In the Chevy Tahoe example above, the classifier would establish that the page is about cars/automotive, and only those ads will be considered. Even if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain. To implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity is small enough) with high precision (to reduce the probability of mis-match). We evaluated several classifiers and taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes and a variation of Rocchio's classifier [9]. This classifier gave the best results in both page and ad classification, and ultimately in ad relevance. The paper proceeds as follows. In the next section we review the basic principles of contextual advertising. Section 3 overviews the related work. Section 4 describes the taxonomy and document classifier that were used for page and ad classification. Section 5 describes the semantic-syntactic method. In Section 6 we briefly discuss how to search the ad space efficiently in order to return the top-k ranked ads. Experimental evaluation is presented in Section 7. Finally, Section 8 presents the concluding remarks. 2. OVERVIEW OF CONTEXTUAL ADVERTISING Contextual advertising is an interplay of four players:
• The publisher is the owner of the web pages on which the advertising is displayed. The publisher typically aims to maximize advertising revenue while providing a good user experience.
• The advertiser provides the supply of ads. Usually the activity of the advertisers is organized around campaigns, which are defined by a set of ads with a particular temporal and thematic goal (e.g. sale of digital cameras during the holiday season). As in traditional advertising, the goal of the advertisers can be broadly defined as the promotion of products or services.
• The ad network is a mediator between the advertiser and the publisher and selects the ads that are put on the pages. The ad-network shares the advertisement revenue with the publisher.
• Users visit the web pages of the publisher and interact with the ads.
Contextual advertising usually falls into the category of direct marketing (as opposed to brand advertising), that is, advertising whose aim is a direct response where the effect of a campaign is measured by the user reaction. One of the advantages of online advertising in general and contextual advertising in particular is that, compared to the traditional media, it is relatively easy to measure the user response. Usually the desired immediate reaction is for the user to follow the link in the ad and visit the advertiser's web site and, as noted, the prevalent financial model is that the advertiser pays a certain amount for every click on the advertisement (PPC). The revenue is shared between the publisher and the network. Context match advertising has grown from Sponsored Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.
In most networks, the amount paid by the advertiser for each SS click is determined by an auction process where the advertisers place bids on a search phrase, and their position in the tower of ads displayed in conjunction with the result is determined by their bid. Thus each ad is annotated with one or more bid phrases. The bid phrase has no direct bearing on the ad placement in CM. However, it is a concise description of the target ad audience as determined by the advertiser, and it has been shown to be an important feature for successful CM ad placement [8]. In addition to the bid phrase, an ad is also characterized by a title, usually displayed in a bold font, and an abstract or creative, which is the few lines of text, usually less than 120 characters, displayed on the page. The ad-network model aligns the interests of the publishers, advertisers and the network. In general, clicks bring benefits to both the publisher and the ad network by providing revenue, and to the advertiser by bringing traffic to the target web site. The revenue of the network, given a page p, can be estimated as:

R = \sum_{i=1}^{k} P(click \mid p, a_i) \cdot price(a_i, i)

where k is the number of ads displayed on page p and price(a_i, i) is the click-price of the current ad a_i at position i. The price in this model depends on the set of ads presented on the page. Several models have been proposed to determine the price, most of them based on generalizations of second price auctions. However, for simplicity we ignore the pricing model and concentrate on finding ads that will maximize the first term of the product, that is, we search for

\arg\max_i P(click \mid p, a_i)

Furthermore, we assume that the probability of click for a given ad and page is determined by its relevance score with respect to the page, thus ignoring the positional effect of the ad placement on the page. We assume that this is an orthogonal factor to the relevance component and could be easily incorporated in the model.
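As a small worked illustration of these two formulas (the click probabilities and click-prices below are made up for the example, not data from the paper):

    # Estimated P(click | p, a) for three candidate ads on a page p.
    p_click = {"a1": 0.030, "a2": 0.012, "a3": 0.025}
    # Click-price of each ad at the position where it would be shown.
    price = {("a1", 1): 0.40, ("a3", 2): 0.35}

    # Rank by click probability (the relevance proxy) and keep k = 2 ads.
    ranked = sorted(p_click, key=p_click.get, reverse=True)[:2]
    # R = sum_i P(click | p, a_i) * price(a_i, i)
    R = sum(p_click[a] * price[(a, i + 1)] for i, a in enumerate(ranked))
    print(ranked, round(R, 5))   # ['a1', 'a3'] 0.02075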
3. RELATED WORK Online advertising in general and contextual advertising in particular are emerging areas of research. The published literature is very sparse. A study presented in [13] confirms the intuition that ads need to be relevant to the user's interest to avoid degrading the user's experience and increase the probability of reaction. A recent work by Ribeiro-Neto et al. [8] examines a number of strategies to match pages to ads based on extracted keywords. The ads and pages are represented as vectors in a vector space. The first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector. To find out the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector. The winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors. While both pages and ads are mapped to the same space, there is a discrepancy (impedance mismatch) between the vocabulary used in the ads and in the pages. Furthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take synonyms into account. To solve this problem, Ribeiro-Neto et al. expand the page vocabulary with terms from other similar pages, weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision. In a follow-up work [7] the authors propose a method to learn the impact of individual features using genetic programming to produce a matching function. The function is represented as a tree composed of arithmetic operators and the log function as internal nodes, and different numerical features of the query and ad terms as leaves. The results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8]. Another approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads. In [14] a system for phrase extraction is described that uses a variety of features to determine the importance of page phrases for advertising purposes. The system is trained with pages that have been hand annotated with important phrases. The learning algorithm takes into account features based on tf-idf, html meta data and query logs to detect the most important phrases. During evaluation, each page phrase up to length 5 is considered as a potential result and evaluated against a trained classifier. In our work we also experimented with a phrase extractor based on the work reported in [12]. While slightly increasing the precision, it did not change the relative performance of the explored algorithms. 4. PAGE AND AD CLASSIFICATION 4.1 Taxonomy Choice The semantic match of the pages and the ads is performed by classifying both into a common taxonomy. The matching process requires that the taxonomy provides sufficient differentiation between the common commercial topics. For example, classifying all medical related pages into one node will not result in a good classification, since both sore foot and flu pages would end up in the same node. The ads suitable for these two concepts are, however, very different. To obtain sufficient resolution, we used a taxonomy of around 6000 nodes primarily built for classifying commercial interest queries, rather than pages or ads. This taxonomy has been commercially built by Yahoo! US. We will explain below how we can use the same taxonomy to classify pages and ads as well. Each node in our source taxonomy is represented as a collection of exemplary bid phrases or queries that correspond to that node's concept. Each node has on average around 100 queries. The queries placed in the taxonomy are high volume queries and queries of high interest to advertisers, as indicated by an unusually high cost-per-click (CPC) price. The taxonomy has been populated by human editors using keyword suggestion tools similar to the ones used by ad networks to suggest keywords to advertisers. After initial seeding with a few queries, using the provided tools a human editor can add several hundred queries to a given node. Nevertheless, it has been a significant effort to develop this 6000-node taxonomy, requiring several person-years of work. A similar-in-spirit process for building enterprise taxonomies via queries has been presented in [6]. However, the details and tools are completely different. Figure 1 provides some statistics about the taxonomy used in this work. 4.2 Classification Method As explained, the semantic phase of the matching relies on ads and pages being topically close. Thus we need to classify pages into the same taxonomy used to classify ads. In this section we overview the methods we used to build a page and an ad classifier pair.
The detailed description and evaluation of this process is outside the scope of this paper. Given the taxonomy of queries (or bid phrases; we use these terms interchangeably) described in the previous section, we tried three methods to build corresponding page and ad classifiers. For the first two methods we tried to find exemplary pages and ads for each concept as follows.

[Figure 1: Taxonomy statistics: categories per level; fan-out for non-leaf nodes; and queries per node]

We generated a page training set by running the queries in the taxonomy over a Web search index and using the top 10 results, after some filtering, as documents labeled with the query's label. On the ad side we generated a training set for each class by selecting the ads that have a bid phrase assigned to this class. Using these training sets we then trained a hierarchical SVM [2] (one against all between every group of siblings) and a log-regression [11] classifier. (The second method differs from the first in the type of secondary filtering used. This filtering eliminates low content pages, pages deemed unsuitable for advertising, pages that lead to excessive class confusion, etc.) However, we obtained the best performance by using the third document classifier, based on the information in the source taxonomy queries only. For each taxonomy node we concatenated all the exemplary queries into a single meta-document. We then used the meta-document as a centroid for a nearest-neighbor classifier based on Rocchio's framework [9], with only positive examples and no relevance feedback. Each centroid is defined as a sum of the tf-idf vectors of the queries, normalized by the number of queries in the class:

c_j = \frac{1}{|C_j|} \sum_{q \in C_j} \frac{q}{\|q\|}

where c_j is the centroid for class C_j and q iterates over the queries in that class. The classification is based on the cosine of the angle between the document d and the centroid meta-documents:

C_{max} = \arg\max_{C_j \in C} \frac{c_j}{\|c_j\|} \cdot \frac{d}{\|d\|} = \arg\max_{C_j \in C} \frac{\sum_{i \in F} c_j^i \, d^i}{\sqrt{\sum_{i \in F} (c_j^i)^2} \sqrt{\sum_{i \in F} (d^i)^2}}

where F is the set of features. The score is normalized by the document and class length to produce comparable scores. The terms c_j^i and d^i represent the weight of the i-th feature in the class centroid and the document, respectively. These weights are based on the standard tf-idf formula. As the score of the max class is normalized with regard to document length, the scores for different documents are comparable. We conducted tests using professional editors to judge the quality of page and ad class assignments. The tests showed that for both ads and pages the Rocchio classifier returned the best results, especially on the page side. This is probably a result of the noise induced by using a search engine to machine-generate training pages for the SVM and log-regression classifiers. It is an area of current investigation how to improve the classification using a noisy training set. Based on the test results we decided to use the Rocchio classifier on both the ad and the page side.
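A minimal sketch of this centroid classifier, assuming whitespace tokenization and a placeholder idf table (both stand-ins for the production pipeline); the example taxonomy nodes and exemplary queries are hypothetical:

    import math
    from collections import Counter, defaultdict

    def tf_idf(text, idf):
        tf = Counter(text.split())
        return {t: n * idf.get(t, 1.0) for t, n in tf.items()}

    def centroid(queries, idf):
        """c_j = (1/|C_j|) * sum over queries q of q/||q||."""
        c = defaultdict(float)
        for q in queries:
            v = tf_idf(q, idf)
            norm = math.sqrt(sum(w * w for w in v.values())) or 1.0
            for t, w in v.items():
                c[t] += w / norm
        return {t: w / len(queries) for t, w in c.items()}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def classify(document, centroids, idf):
        """Nearest-centroid classification by cosine similarity."""
        d = tf_idf(document, idf)
        return max(centroids, key=lambda cls: cosine(centroids[cls], d))

    idf = {}   # uniform idf for the sketch
    cents = {"Skiing": centroid(["ski resort deals", "ski equipment"], idf),
             "Knitting": centroid(["knitting patterns", "yarn shop"], idf)}
    print(classify("cheap ski equipment rental", cents, idf))   # Skiing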
5. SEMANTIC-SYNTACTIC MATCHING Contextual advertising systems process the content of the page, extract features, and then search the ad space to find the best matching ads. Given a page p and a set of ads A = \{a_1, \ldots, a_s\}, we estimate the relative probability of click P(click|p, a) with a score that captures the quality of the match between the page and the ad. To find the best ads for a page we rank the ads in A and select the top few for display. The problem can be formally defined as matching every page in the set of all pages P = \{p_1, \ldots, p_{pc}\} to one or more ads in the set of ads. Each page is represented as a set of page sections p_i = \{p_{i,1}, p_{i,2}, \ldots, p_{i,m}\}. The sections of the page represent different structural parts, such as title, metadata, body, headings, etc. In turn, each section is an unordered bag of terms (keywords). A page is represented by the union of the terms in each section: p_i = \{pw_1^{s_1}, pw_2^{s_1}, \ldots, pw_m^{s_i}\}, where pw stands for a page word and the superscript indicates the section of each term. A term can be a unigram or a phrase extracted by a phrase extractor [12]. Similarly, we represent each ad as a set of sections a = \{a_1, a_2, \ldots, a_l\}, each section in turn being an unordered set of terms: a_i = \{aw_1^{s_1}, aw_2^{s_1}, \ldots, aw_l^{s_j}\}, where aw is an ad word. The ads in our experiments have 3 sections: title, body, and bid phrase. In this work, to produce the match score we use only the ad/page textual information, leaving user information and other data for future work. Next, each page and ad term is associated with a weight based on its tf-idf value. The tf value is determined based on the individual ad sections. There are several choices for the value of idf, based on different scopes. On the ad side, it has been shown in previous work that the set of ads of one campaign provides a good scope for the estimation of idf and leads to improved matching results [8]. However, in this work for simplicity we do not take campaigns into account. To combine the impact of the term's section and its tf-idf score, the ad/page term weight is defined as:

tWeight(kw^{s_i}) = weightSection(S_i) \cdot tfidf(kw)

where tWeight stands for term weight and weightSection(S_i) is the weight assigned to a page or ad section. This weight is a fixed parameter determined by experimentation. Each ad and page is classified into the topical taxonomy. We define these two mappings:

Tax(p_i) = \{pc_{i1}, \ldots, pc_{iu}\} \quad Tax(a_j) = \{ac_{j1}, \ldots, ac_{jv}\}

where pc and ac are page and ad classes correspondingly. Each assignment is associated with a weight given by the classifier. The weights are normalized to sum to 1:

\sum_{c \in Tax(x_i)} cWeight(c) = 1

where x_i is either a page or an ad, and cWeight(c) is the class weight, i.e., the normalized confidence assigned by the classifier. The number of classes can vary between different pages and ads. This corresponds to the number of topics a page/ad can be associated with and is almost always in the range 1-4. Now we define the relevance score of an ad a_i and page p_i as a convex combination of the keyword (syntactic) and classification (semantic) scores:

Score(p_i, a_i) = \alpha \cdot TaxScore(Tax(p_i), Tax(a_i)) + (1 - \alpha) \cdot KeywordScore(p_i, a_i)

The parameter \alpha determines the relative weight of the taxonomy score and the keyword score. To calculate the keyword score we use the vector space model [1], where both the pages and ads are represented in n-dimensional space, one dimension for each distinct term. The magnitude of each dimension is determined by the tWeight() formula.
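A sketch of this scoring skeleton; the section weights and the value of alpha are illustrative parameters (the paper fixes the section weights by experimentation and studies alpha in Section 7), and the two component functions are passed in as arguments:

    # Illustrative section weights; the real values are set by experimentation.
    SECTION_WEIGHT = {"title": 3.0, "bid_phrase": 2.0, "body": 1.0}

    def t_weight(section, tfidf_value):
        """tWeight(kw^s) = weightSection(s) * tfidf(kw)."""
        return SECTION_WEIGHT[section] * tfidf_value

    def relevance_score(page, ad, keyword_score, tax_score, alpha=0.8):
        """Score(p, a) = alpha * TaxScore + (1 - alpha) * KeywordScore."""
        return alpha * tax_score(page, ad) + (1 - alpha) * keyword_score(page, ad)

    # With a taxonomy score of 0.7 and a keyword cosine of 0.4:
    print(relevance_score(None, None, lambda p, a: 0.4, lambda p, a: 0.7))  # 0.64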
The keyword score is then defined as the cosine of the angle between the page and the ad vectors:

KeywordScore(p_i, a_i) = \frac{\sum_{i \in K} tWeight(pw_i) \cdot tWeight(aw_i)}{\sqrt{\sum_{i \in K} (tWeight(pw_i))^2} \sqrt{\sum_{i \in K} (tWeight(aw_i))^2}}

where K is the set of all the keywords. The formula assumes independence between the words in the pages and ads. Furthermore, it ignores the order and the proximity of the terms in the scoring. We experimented with the impact of phrases and proximity on the keyword score and did not see a substantial impact of these two factors. We now turn to the definition of the TaxScore. This function indicates the topical match between a given ad and a page. As opposed to the keywords, which are treated as independent dimensions, here the classes (topics) are organized into a hierarchy. One of the goals in the design of the TaxScore function is to be able to generalize within the taxonomy, that is, to accept topically related ads. Generalization can help to place ads in cases when there is no ad that matches both the category and the keywords of the page. The example in Figure 2 illustrates this situation. In this example, in the absence of ski ads, a page about skiing containing the word Atomic could be matched to the available snowboarding ad for the same brand.

[Figure 2: Two generalization paths]

In general we would like the match to be stronger when both the ad and the page are classified into the same node, and weaker when the distance between the nodes in the taxonomy gets larger. There are multiple ways to specify the distance between two taxonomy nodes. Besides the above requirement, this function should lend itself to an efficient search of the ad space. Given a page we have to find the ad in a few milliseconds, as this impacts the presentation to a waiting user. This will be further discussed in the next section. To capture both the generalization and efficiency requirements we define the TaxScore function as follows:

TaxScore(PC, AC) = \sum_{pc \in PC} \sum_{ac \in AC} idist(LCA(pc, ac), ac) \cdot cWeight(pc) \cdot cWeight(ac)

In this function we consider every combination of page class and ad class. For each combination we multiply the product of the class weights with the inverse distance function between the least common ancestor of the two classes (LCA) and the ad class. The inverse distance function idist(c_1, c_2) takes two nodes on the same path in the class taxonomy and returns a number in the interval [0, 1] depending on the distance of the two class nodes. It returns 1 if the two nodes are the same, and declines toward 0 as LCA(pc, ac) or ac approaches the root of the taxonomy. The rate of decline determines the weight of the generalization versus the other terms in the scoring formula.
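A sketch of the TaxScore computation over a hypothetical four-node taxonomy; the toy_idist decay used here is a placeholder for the density-based idist defined next in the text:

    PARENT = {"Skiing": "Winter Sports", "Snowboarding": "Winter Sports",
              "Winter Sports": "Sports", "Sports": None}

    def path_to_root(c):
        out = []
        while c is not None:
            out.append(c)
            c = PARENT[c]
        return out

    def lca(c1, c2):
        """Least common ancestor: first node on c1's root path shared by c2's."""
        anc2 = set(path_to_root(c2))
        return next(c for c in path_to_root(c1) if c in anc2)

    def tax_score(page_classes, ad_classes, idist):
        """Sum of idist(LCA(pc, ac), ac) * cWeight(pc) * cWeight(ac)."""
        return sum(idist(lca(pc, ac), ac) * pw * aw
                   for pc, pw in page_classes.items()
                   for ac, aw in ad_classes.items())

    def toy_idist(anc, node, decay=0.5):
        """1 when the nodes coincide, decaying per generalization level."""
        return decay ** path_to_root(node).index(anc)

    # A skiing page scored against a snowboarding ad: the LCA is Winter Sports.
    print(tax_score({"Skiing": 1.0}, {"Snowboarding": 1.0}, toy_idist))  # 0.5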
To determine the rate of decline we consider the impact on the specificity of the match when we substitute a class with one of its ancestors. In general the impact is topic dependent. For example, the node Hobby in our taxonomy has tens of children, each representing a different hobby, two examples being Sailing and Knitting. Placing an ad about Knitting on a page about Sailing does not make much sense. However, in the Winter Sports example above, in the absence of a better alternative, skiing ads could be put on snowboarding pages, as they might promote the same venues, equipment vendors, etc. Such detailed analysis on a case by case basis is prohibitively expensive due to the size of the taxonomy. One option is to use the confidences of the ancestor classes as given by the classifier. However, we found these numbers not suitable for this purpose, as the magnitude of the confidences does not necessarily decrease when going up the tree. Another option is to use explore-exploit methods based on machine learning from the click feedback, as described in [10]. However, for simplicity, in this work we chose a simple heuristic to determine the cost of generalization from a child to a parent. In this heuristic we look at the broadening of the scope when moving from a child to a parent. We estimate the broadening by the density of ads classified in the parent node vs. the child node. The density is obtained by classifying a large set of ads in the taxonomy using the document classifier described above. Based on this, let n_c be the number of documents classified into the subtree rooted at c. Then we define:

idist(c, p) = \frac{n_c}{n_p}

where c represents the child node and p is the parent node. This fraction can be viewed as the probability that an ad belonging to the parent topic is suitable for the child topic. 6. SEARCHING THE AD SPACE In the previous section we discussed the choice of scoring function to estimate the match between an ad and a page. The top-k ads with the highest score are offered by the system for placement on the publisher's page. The process of score calculation and ad selection is to be done in real time and therefore must be very efficient. As the ad collections are in the range of hundreds of millions of entries, there is a need for indexed access to the ads. Inverted indexes provide scalable and low latency solutions for searching documents. However, these have traditionally been used to search based on keywords. To be able to search the ads on a combination of keywords and classes, we have mapped the classification match to term match and adapted the scoring function to be suitable for fast evaluation over inverted indexes. In this section we overview the ad indexing and the ranking function of our prototype ad search system for matching ads and pages. We used a standard inverted index framework where there is one posting list for each distinct term. The ads are parsed into terms and each term is associated with a weight based on the section in which it appears. Weights from distinct occurrences of a term in an ad are added together, so that the posting lists contain one entry per term/ad combination. The next challenge is how to index the ads so that the class information is preserved in the index. A simple method is to create unique meta-terms for the classes and annotate each ad with one meta-term for each assigned class. However, this method does not allow for generalization, since only the ads matching an exact label of the page would be selected. Therefore we annotated each ad with one meta-term for each ancestor of the assigned class. The weights of the meta-terms are assigned according to the value of the idist() function defined in the previous section. On the query side, given the keywords and the class of a page, we compose a keyword-only query by inserting one class term for each ancestor of the classes assigned to the page. The scoring function is adapted into a two-part score: one for the class meta-terms and another for the text terms. During the class score calculation, for each class path we use only the lowest class meta-term, ignoring the others.
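A sketch of this annotation step, using the same hypothetical taxonomy as above; the subtree ad counts in n_ads are made-up densities, and the prose example that follows walks through the same Skiing case:

    # Made-up number of ads classified into the subtree rooted at each node.
    n_ads = {"Sports": 10000, "Winter Sports": 1200, "Skiing": 400}
    ANCESTORS = {"Skiing": ["Skiing", "Winter Sports", "Sports"]}  # node -> root path

    def idist_up(node, ancestor):
        """Chained child/parent ratios telescope to n_node / n_ancestor."""
        return n_ads[node] / n_ads[ancestor]

    def class_meta_terms(ad_class, confidence):
        """One meta-term per ancestor, weighted by confidence * idist."""
        return {("CLASS", a): round(confidence * idist_up(ad_class, a), 3)
                for a in ANCESTORS[ad_class]}

    print(class_meta_terms("Skiing", 0.9))
    # {('CLASS', 'Skiing'): 0.9, ('CLASS', 'Winter Sports'): 0.3,
    #  ('CLASS', 'Sports'): 0.036}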
For example, if an ad belongs to the Skiing class and is annotated with both Skiing and its parent Winter Sports, the index will contain the special class meta-terms for both Skiing and Winter Sports (and all their ancestors), with weights according to the product of the classifier confidence and the idist function. When matching with a page classified into Skiing, the query will contain terms for the class Skiing and for each of its ancestors. However, when scoring an ad classified into Skiing we will use the weight for the Skiing class meta-term. Ads classified into Snowboarding will be scored using the weight of the Winter Sports meta-term. To make this check efficient, we keep a sorted list of all the class paths and, at scoring time, we search the paths bottom-up for a meta-term appearing in the ad. The first meta-term found is used for scoring; the rest are ignored. At runtime, we evaluate the query using a variant of the WAND algorithm [3]. This is a document-at-a-time algorithm [1] that uses a branch-and-bound approach to derive efficient moves for the cursors associated with the posting lists. It finds the next cursor to be moved based on an upper bound of the score for the documents at which the cursors are currently positioned. The algorithm keeps a heap of current best candidates. Documents with an upper bound smaller than the current minimum score among the candidate documents can be eliminated from further consideration, and thus the cursors can skip over them. To find the upper bound for a document, the algorithm assumes that all cursors that are before it will hit this document (i.e. the document contains all those terms represented by cursors before or at that document). It has been shown that WAND can be used with any function that is monotonic with respect to the number of matching terms in the document. Our scoring function is monotonic, since the score can never decrease when more terms are found in the document. In the special case when we add a cursor representing an ancestor of a class term already factored into the score, the score simply does not change (we add 0). Given these properties, we use an adaptation of the WAND algorithm where we change the calculation of the scoring function and the upper bound score calculation to reflect our scoring function. The rest of the algorithm remains unchanged.
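A much-simplified sketch of the upper-bound pruning idea (real WAND moves sorted posting-list cursors and derives the bound from the cursor positions; here the postings are materialized per document for brevity, and the toy query and weights are hypothetical):

    import heapq

    def topk_with_pruning(postings, query_weights, k=2):
        """postings: term -> list of (doc_id, weight). Returns top-k (score, doc)."""
        max_w = {t: max(w for _, w in pl) for t, pl in postings.items()}
        by_doc = {}
        for t, pl in postings.items():
            for d, w in pl:
                by_doc.setdefault(d, {})[t] = w
        heap = []                                  # min-heap of (score, doc)
        for d in sorted(by_doc):
            terms = by_doc[d]
            # Optimistic bound: each matching term contributes its maximum weight.
            ub = sum(query_weights[t] * max_w[t] for t in terms)
            if len(heap) == k and ub <= heap[0][0]:
                continue                           # cannot enter the top-k: skip
            score = sum(query_weights[t] * w for t, w in terms.items())
            heapq.heappush(heap, (score, d))
            if len(heap) > k:
                heapq.heappop(heap)
        return sorted(heap, reverse=True)

    postings = {"ski": [(1, 0.9), (2, 0.3)], "resort": [(1, 0.5), (3, 0.8)]}
    print(topk_with_pruning(postings, {"ski": 1.0, "resort": 1.0}))
    # [(1.4, 1), (0.8, 3)]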
7. EXPERIMENTAL EVALUATION 7.1 Data and Methodology We quantify the effect of the semantic-syntactic matching using a set of 105 pages. This set of pages was selected as a random sample of a larger set of around 20 million pages with contextual advertising. Ads for each of these pages have been selected from a larger pool of ads (tens of millions) by previous experiments conducted by Yahoo! US for other purposes. Each page-ad pair has been judged by three or more human judges on a 1 to 3 scale:
1. Relevant. The ad is semantically directly related to the main subject of the page. For example, if the page is about the National Football League and the ad is about tickets for NFL games, it would be scored as 1.
2. Somewhat relevant. The ad is related to the secondary subject of the page, or is related to the main topic of the page in a general way. In the NFL page example, an ad about NFL branded products would be judged as 2.
3. Irrelevant. The ad is unrelated to the page. For example, a mention of the NFL player John Maytag triggering washing machine ads on an NFL page.

Table 1: Dataset statistics
  pages: 105
  words per page: 868
  judgments: 2946
  judgment inter-editor agreement: 84%
  unique ads: 2680
  unique ads per page: 25.5
  page classification precision: 70%
  ad classification precision: 86%

To obtain a score for a page-ad pair, we average all the scores and then round to the closest integer. We then used these judgments to evaluate how well our methods distinguish the positive and the negative ad assignments for each page. The statistics of the page dataset are given in Table 1. The original experiments that paired the pages and the ads are loosely related to the syntactic keyword based matching and classification based assignment, but used different taxonomies and keyword extraction techniques. Therefore we could not use standard pooling as an evaluation method, since we did not control the way the pairs were selected and could not precisely establish the set of ads from which the placed ads were selected. Instead, in our evaluation for each page we consider only those ads for which we have judgments. Each method was applied to this set and the ads were ranked by score. The relative effectiveness of the algorithms was judged by comparing how well the methods separated the ads with positive judgments from the ads with negative judgments. We present precision at various levels of recall within this set. As the set of ads per page is relatively small, the evaluation reports precision that is higher than it would be with a larger set of negative ads. However, these numbers still establish the relative performance of the algorithms, and we can use them to evaluate performance at different score thresholds. In addition to the precision-recall over the judged ads, we also present Kendall's τ rank correlation coefficient to establish how far the orderings produced by our ranking algorithms are from the perfect ordering. For this test we ranked the judged ads by the scores assigned by the judges and then compared this order to the order assigned by our algorithms. Finally, we also examined the precision at positions 1, 3 and 5. 7.2 Results Figure 3 shows the precision-recall curves for the syntactic matching (keywords only) vs. the syntactic-semantic matching with the optimal value of α = 0.8 (judged by the 11-point score [1]).

[Figure 3: Data Set 2: Precision vs. recall of syntactic match (α = 0) vs. syntactic-semantic match (α = 0.8); curves shown with and without phrases]

In this figure, we assume that the ad-page pairs judged with 1 or 2 are positive examples and the 3s are negative examples. We also examined counting only the pairs judged with 1 as positive examples and did not find a significant change in the relative performance of the tested methods. Overlaid are also results using phrases in the keyword match. We see that these additional features do not change the relative performance of the algorithms. The graphs show a significant impact of the class information, especially in the mid range of recall values. In the low recall part of the chart the curves meet. This indicates that when the keyword match is really strong (i.e., when the ad is almost contained within the page) the precision of the syntactic keyword match is comparable with that of the semantic-syntactic match. Note however that the two methods might produce different ads and could be used as complements at this level of recall.

Table 2: Kendall's τ for different α values
  α = 0:    0.086
  α = 0.25: 0.155
  α = 0.50: 0.166
  α = 0.75: 0.158
  α = 1:    0.136
The semantic component provides the largest lift in precision in the mid range of recall, where a 25% improvement is achieved by using the class information for ad placement. This means that when there is somewhat of a match between the ad and the page, the restriction to the right classes provides a better scope for selecting the ads. Table 2 shows the Kendall's τ values for different values of α. We calculated the values by ranking all the judged ads for each page and averaging the values over all the pages. The ads with tied judgments were given the rank of the middle position. The results show that when we take into account all the ad-page pairs, the optimal value of α is around 0.5. Note that the purely syntactic match (α = 0) is by far the weakest method. Figure 4 shows the effect of the parameter α on the scoring. We see that in most cases precision grows or is flat when we increase α, except at the low level of recall where, due to the small number of data points, there is a bit of jitter in the results.

[Figure 4: Impact of α on precision for different levels of recall (20%-80%)]

Table 3 shows the precision at positions 1, 3 and 5.

Table 3: Precision at positions 1, 3 and 5
  α         #1  #3  #5  sum
  α = 0     80  70  68  218
  α = 0.25  89  76  73  238
  α = 0.5   89  74  73  236
  α = 0.75  89  78  73  240
  α = 1     86  79  74  239

Again, the purely syntactic method clearly has the lowest score at individual positions and in the total number of correctly placed ads. The numbers are close due to the small number of ads considered, but there are still some noticeable trends. For position 1 the optimal α is in the range of 0.25 to 0.75. For positions 3 and 5 the optimum is at α = 1. This also indicates that for those ads that have a very high keyword score, the semantic information is somewhat less important. If almost all the words in an ad appear in the page, this ad is likely to be relevant for the page. However, when there is no such clear affinity, the class information becomes a dominant factor. 8. CONCLUSION Contextual advertising is the economic engine behind a large number of non-transactional sites on the Web. Studies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content. All existing commercial contextual match solutions known to us evolved from search advertising solutions, whereby a search query is matched to the bid phrase of the ads. A natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads. However, individual phrases and words might have multiple meanings and/or be unrelated to the overall topic of the page, leading to mismatched ads. In this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score. The semantic match relies on the classification of pages and ads into a 6000-node commercial advertising taxonomy to determine their topical distance. As the classification relies on the full content of the page, it is more robust than matching individual page phrases. The semantic match is complemented with a syntactic match, and the final score is a convex combination of the two sub-scores with the relative weight of each determined by a parameter α.
We evaluated the semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising. As shown in our experimental evaluation, the optimal value of the parameter α depends on the precise objective of optimization (precision at a particular position, precision at a given recall). However, in all cases the optimal value of α is between 0.25 and 0.9, indicating a significant effect of the semantic score component. The effectiveness of the syntactic match depends on the quality of the pages used. In lower quality pages we are more likely to make classification errors that will then negatively impact the matching. We demonstrated that it is feasible to build a large scale classifier that has sufficiently good precision for this application. We are currently examining how to employ machine learning algorithms to learn the optimal value of α based on a collection of features of the input pages. 9. REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM, 1999.
[2] B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Computational Learning Theory, pages 144-152, 1992.
[3] A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and J. Zien. Efficient query evaluation using a two-level retrieval process. In CIKM '03: Proc. of the twelfth intl. conf. on Information and Knowledge Management, pages 426-434, New York, NY, 2003. ACM.
[4] P. Chatterjee, D. L. Hoffman, and T. P. Novak. Modeling the clickstream: Implications for web-based advertising efforts. Marketing Science, 22(4):520-541, 2003.
[5] D. Fain and J. Pedersen. Sponsored search: A brief history. In Proc. of the Second Workshop on Sponsored Search Auctions. Web publication, 2006.
[6] S. C. Gates, W. Teiken, and K.-S. F. Cheng. Taxonomies by the numbers: building high-performance taxonomies. In CIKM '05: Proc. of the 14th ACM intl. conf. on Information and Knowledge Management, pages 568-577, New York, NY, 2005. ACM.
[7] A. Lacerda, M. Cristo, M. A. Gonçalves, W. Fan, N. Ziviani, and B. Ribeiro-Neto. Learning to advertise. In SIGIR '06: Proc. of the 29th annual intl. ACM SIGIR conf., pages 549-556, New York, NY, 2006. ACM.
[8] B. Ribeiro-Neto, M. Cristo, P. B. Golgher, and E. S. de Moura. Impedance coupling in content-targeted advertising. In SIGIR '05: Proc. of the 28th annual intl. ACM SIGIR conf., pages 496-503, New York, NY, 2005. ACM.
[9] J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice-Hall, 1971.
[10] S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In Proc. of the SIAM intl. conf. on Data Mining, 2007.
[11] T. Santner and D. Duffy. The Statistical Analysis of Discrete Data. Springer-Verlag, 1989.
[12] R. Stata, K. Bharat, and F. Maghoul. The term vector database: fast access to indexing terms for web pages. Computer Networks, 33(1-6):247-255, 2000.
[13] C. Wang, P. Zhang, R. Choi, and M. D. Eredita. Understanding consumers attitude toward advertising. In Eighth Americas conf. on Information Systems, pages 1143-1148, 2002.
[14] W. Yih, J. Goodman, and V. R. Carvalho. Finding advertising keywords on web pages. In WWW '06: Proc. of the 15th intl. conf. on World Wide Web, pages 213-222, New York, NY, 2006. ACM.
A Semantic Approach to Contextual Advertising ABSTRACT Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features. 1. INTRODUCTION Web advertising supports a large swath of today's Internet ecosystem. The total internet advertiser spend in US alone in 2006 is estimated at over 17 billion dollars with a growth rate of almost 20% year over year. A large part of this market consists of textual ads, that is, short text messages usually marked as "sponsored links" or similar. The main advertising channels used to distribute textual ads are: 1. Sponsored Search or Paid Search advertising which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query. All major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency. 2. Contextual advertising or Context Match which refers to the placement of commercial ads within the content of a generic web page. In contextual advertising usually there is a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience. Again, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services but there are also many smaller players. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. (See [5] for a "brief history"). However, today, almost all of the for-profit non-transactional web sites (that is, sites that do not sell anything directly) rely at least in part on revenue from context match. CM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers. Without this model, the web would be a lot smaller! The prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC). 
There are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad and pay-per-action where the advertiser pays only if the ad leads to a sale or similar transaction. For simplicity, we only deal with the PPC model in this paper. Given a page, rather than placing generic ads, it seems preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks. This intuition is supported by the analogy to conventional publishing where there are very successful magazines (e.g. Vogue) where a majority of the content is topical advertising (fashion in the case of Vogue) and by user studies that have confirmed that increased relevance increases the number of ad-clicks [4, 13]. Previous published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details). However targeting mechanisms based solely on phrases found within the text of the page can lead to problems: For example, a page about a famous golfer named "John Maytag" might trigger an ad for "Maytag dishwashers" since Maytag is a popular brand. Another example could be a page describing the Chevy Tahoe truck (a popular vehicle in US) triggering an ad about "Lake Tahoe vacations". Polysemy is not the only culprit: there is a (maybe apocryphal) story about a lurid news item about a headless body found in a suitcase triggering an ad for Samsonite luggage! In all these examples the mismatch arises from the fact that the ads are not appropriate for the context. In order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase. The semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula. Hence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach. Furthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page. For example if the page is about an event in curling, a rare winter sport, and contains the words "Alpine Meadows", the system would still rank highly ads for skiing in Alpine Meadows as these ads belong to the class "skiing" which is a sibling of the class "curling" and both of these classes share the parent "winter sports". In some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy. The taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say "Canon". The keywords capture the specificity to a level that is more dynamic and granular. In the digital camera example this would correspond to the level of a particular model, say "Canon SD450" whose advertising life might be just a few months. Updating the taxonomy with new nodes or even new vocabulary each time a new model comes to the market is prohibitively expensive when we are dealing with millions of manufacturers. 
In addition to increased click through rate (CTR) due to increased relevance, a significant but harder to quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience. In the Chevy Tahoe example above, the classifier would establish that the page is about cars/automotive and only those ads will be considered. Even if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain. To implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity would be small enough) with high precision (to reduce the probability of mis-match). We evaluated several classifiers and taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes using a variation of Rocchio's classifier [9]. This classifier gave the best results in both page and ad classification, and ultimately in ad relevance. The paper proceeds as follows. In the next section we review the basic principles of contextual advertising. Section 3 overviews the related work. Section 4 describes the taxonomy and document classifier that were used for page and ad classification. Section 5 describes the semantic-syntactic method. In Section 6 we briefly discuss how to search the ad space efficiently in order to return the top-k ranked ads. Experimental evaluation is presented in Section 7. Finally, Section 8 presents the concluding remarks. 2. OVERVIEW OF CONTEXTUAL ADVERTISING Contextual advertising is an interplay of four players:
• The publisher is the owner of the web pages on which the advertising is displayed. The publisher typically aims to maximize advertising revenue while providing a good user experience.
• The advertiser provides the supply of ads. Usually the activities of the advertisers are organized around campaigns, which are defined by a set of ads with a particular temporal and thematic goal (e.g. sale of digital cameras during the holiday season). As in traditional advertising, the goal of the advertisers can be broadly defined as the promotion of products or services.
• The ad network is a mediator between the advertiser and the publisher and selects the ads that are put on the pages. The ad-network shares the advertisement revenue with the publisher.
• Users visit the web pages of the publisher and interact with the ads.
Contextual advertising usually falls into the category of direct marketing (as opposed to brand advertising), that is, advertising whose aim is a "direct response" where the effect of a campaign is measured by the user reaction. One of the advantages of online advertising in general and contextual advertising in particular is that, compared to the traditional media, it is relatively easy to measure the user response. Usually the desired immediate reaction is for the user to follow the link in the ad and visit the advertiser's web site and, as noted, the prevalent financial model is that the advertiser pays a certain amount for every click on the advertisement (PPC). The revenue is shared between the publisher and the network. Context match advertising has grown from Sponsored Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.
In most networks, the amount paid by the advertiser for each SS click is determined by an auction process where the advertisers place bids on a search phrase, and their position in the tower of ads displayed in conjunction with the result is determined by their bid. Thus each ad is annotated with one or more bid phrases. The bid phrase has no direct bearing on the ad placement in CM. However, it is a concise description of the target ad audience as determined by the advertiser and it has been shown to be an important feature for successful CM ad placement [8]. In addition to the bid phrase, an ad is also characterized by a title usually displayed in a bold font, and an abstract or creative, which is the few lines of text, usually less than 120 characters, displayed on the page. The ad-network model aligns the interests of the publishers, advertisers and the network. In general, clicks bring benefits to both the publisher and the ad network by providing revenue, and to the advertiser by bringing traffic to the target web site. The revenue of the network, given a page p, can be estimated as:

$$R(p) = \sum_{i=1}^{k} P(\text{click} \mid p, a_i) \cdot \text{price}(a_i, i)$$

where k is the number of ads displayed on page p and price(a_i, i) is the click-price of the current ad a_i at position i. The price in this model depends on the set of ads presented on the page. Several models have been proposed to determine the price, most of them based on generalizations of second price auctions. However, for simplicity we ignore the pricing model and concentrate on finding ads that will maximize the first term of the product, that is, we search for

$$\arg\max_{i} \; P(\text{click} \mid p, a_i)$$

Furthermore we assume that the probability of click for a given ad and page is determined by its relevance score with respect to the page, thus ignoring the positional effect of the ad placement on the page. We assume that this is an orthogonal factor to the relevance component and could be easily incorporated in the model. 3. RELATED WORK Online advertising in general and contextual advertising in particular are emerging areas of research. The published literature is very sparse. A study presented in [13] confirms the intuition that ads need to be relevant to the user's interest to avoid degrading the user's experience and increase the probability of reaction. A recent work by Ribeiro-Neto et al. [8] examines a number of strategies to match pages to ads based on extracted keywords. The ads and pages are represented as vectors in a vector space. The first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector. To find out the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector. The winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors. While both pages and ads are mapped to the same space, there is a discrepancy (impedance mismatch) between the vocabulary used in the ads and in the pages. Furthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take into account synonyms. To solve this problem, Ribeiro-Neto et al. expand the page vocabulary with terms from other similar pages weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision.
In a follow-up work [7] the authors propose a method to learn the impact of individual features using genetic programming to produce a matching function. The function is represented as a tree composed of arithmetic operators and the log function as internal nodes, and different numerical features of the query and ad terms as leaves. The results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8]. Another approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads. In [14] a system for phrase extraction is described that uses a variety of features to determine the importance of page phrases for advertising purposes. The system is trained with pages that have been hand annotated with important phrases. The learning algorithm takes into account features based on tf-idf, html meta data and query logs to detect the most important phrases. During evaluation, each page phrase up to length 5 is considered as a potential result and evaluated against a trained classifier. In our work we also experimented with a phrase extractor based on the work reported in [12]. While increasing the precision slightly, it did not change the relative performance of the explored algorithms. 4. PAGE AND AD CLASSIFICATION 4.1 Taxonomy Choice The semantic match of the pages and the ads is performed by classifying both into a common taxonomy. The matching process requires that the taxonomy provides sufficient differentiation between the common commercial topics. For example, classifying all medical related pages into one node will not result in a good classification since both "sore foot" and "flu" pages will end up in the same node. The ads suitable for these two concepts are, however, very different. To obtain sufficient resolution, we used a taxonomy of around 6000 nodes primarily built for classifying commercial interest queries, rather than pages or ads. This taxonomy has been commercially built by Yahoo! US. We will explain below how we can use the same taxonomy to classify pages and ads as well. Each node in our source taxonomy is represented as a collection of exemplary bid phrases or queries that correspond to that node concept. Each node has on average around 100 queries. The queries placed in the taxonomy are high volume queries and queries of high interest to advertisers, as indicated by an unusually high cost-per-click (CPC) price. The taxonomy has been populated by human editors using keyword suggestion tools similar to the ones used by ad networks to suggest keywords to advertisers. After initial seeding with a few queries, using the provided tools a human editor can add several hundred queries to a given node. Nevertheless, it has been a significant effort to develop this 6000-node taxonomy and it has required several person-years of work. A similar-in-spirit process for building enterprise taxonomies via queries has been presented in [6]. However, the details and tools are completely different. Figure 1 provides some statistics about the taxonomy used in this work. 4.2 Classification Method As explained, the semantic phase of the matching relies on ads and pages being topically close. Thus we need to classify pages into the same taxonomy used to classify ads. In this section we overview the methods we used to build a page and an ad classifier pair.
The detailed description and evaluation of this process is outside the scope of this paper. Given the taxonomy of queries (or bid-phrases; we use these terms interchangeably) described in the previous section, we tried three methods to build corresponding page and ad classifiers. For the first two methods we tried to find exemplary pages and ads for each concept as follows.
Figure 1: Taxonomy statistics: categories per level; fanout for non-leaf nodes; and queries per node
We generated a page training set by running the queries in the taxonomy over a Web search index and using the top 10 results after some filtering as documents labeled with the query's label. On the ad side we generated a training set for each class by selecting the ads that have a bid phrase assigned to this class. Using these training sets we then trained a hierarchical SVM [2] (one against all between every group of siblings) and a log-regression [11] classifier. (The second method differs from the first in the type of secondary filtering used. This filtering eliminates low content pages, pages deemed unsuitable for advertising, pages that lead to excessive class confusion, etc.) However, we obtained the best performance by using the third document classifier, based on the information in the source taxonomy queries only. For each taxonomy node we concatenated all the exemplary queries into a single meta-document. We then used the meta-document as a centroid for a nearest-neighbor classifier based on Rocchio's framework [9] with only positive examples and no relevance feedback. Each centroid is defined as a sum of the tf-idf values of each term, normalized by the number of queries in the class:

$$\vec{c}_j = \frac{1}{|C_j|} \sum_{q \in C_j} \frac{\vec{q}}{\|\vec{q}\|}$$

where $\vec{c}_j$ is the centroid for class $C_j$; $q$ iterates over the queries in a particular class. The classification is based on the cosine of the angle between the document d and the centroid meta-documents:

$$c_{\max} = \arg\max_{C_j} \frac{\sum_{i \in F} c_i \, d_i}{\sqrt{\sum_{i \in F} c_i^2} \; \sqrt{\sum_{i \in F} d_i^2}}$$

where F is the set of features. The score is normalized by the document and class length to produce comparable scores. The terms $c_i$ and $d_i$ represent the weight of the ith feature in the class centroid and the document respectively. These weights are based on the standard tf-idf formula. As the score of the max class is normalized with regard to document length, the scores for different documents are comparable. We conducted tests using professional editors to judge the quality of page and ad class assignments. The tests showed that for both ads and pages the Rocchio classifier returned the best results, especially on the page side. This is probably a result of the noise induced by using a search engine to machine-generate training pages for the SVM and log-regression classifiers. It is an area of current investigation how to improve the classification using a noisy training set. Based on the test results we decided to use Rocchio's classifier on both the ad and the page side.
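Before moving on to the matching formalism, the following is a minimal, self-contained sketch of this centroid-based classification scheme, assuming simple whitespace tokenization; the idf table, class names, and toy queries are hypothetical illustrations rather than the production pipeline.

```python
# Sketch of the Rocchio-style centroid classifier described above:
# positive examples only, no relevance feedback, cosine scoring.
import math
from collections import Counter, defaultdict

def tfidf(text, idf):
    tf = Counter(text.lower().split())
    return {t: c * idf.get(t, 1.0) for t, c in tf.items()}

def normalize(vec):
    norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return {t: w / norm for t, w in vec.items()}

def build_centroids(class_queries, idf):
    """One meta-document per class: the per-query-normalized sum of its
    query vectors, divided by the number of queries in the class."""
    centroids = {}
    for cls, queries in class_queries.items():
        acc = defaultdict(float)
        for q in queries:
            for t, w in normalize(tfidf(q, idf)).items():
                acc[t] += w
        centroids[cls] = normalize({t: w / len(queries) for t, w in acc.items()})
    return centroids

def classify(doc, centroids, idf, top_k=4):
    """Rank classes by the cosine between the document and each centroid."""
    d = normalize(tfidf(doc, idf))
    scores = {cls: sum(w * c.get(t, 0.0) for t, w in d.items())
              for cls, c in centroids.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy usage with hypothetical classes and exemplary queries:
idf = {}
centroids = build_centroids(
    {"skiing": ["ski resorts", "ski rental alpine"],
     "knitting": ["knitting yarn", "knitting patterns"]}, idf)
print(classify("cheap alpine ski rental deals", centroids, idf))
```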
5. SEMANTIC-SYNTACTIC MATCHING Contextual advertising systems process the content of the page, extract features, and then search the ad space to find the best matching ads. Given a page p and a set of ads A = {a_1, ..., a_s} we estimate the relative probability of click P(click | p, a) with a score that captures the quality of the match between the page and the ad. To find the best ads for a page we rank the ads in A and select the top few for display. The problem can be formally defined as matching every page in the set of all pages P = {p_1, p_2, ...} to one or more ads in the set of ads. Each page is represented as a set of page sections $p_i = \{p_{i,1}, p_{i,2}, \ldots, p_{i,m}\}$. The sections of the page represent different structural parts, such as title, metadata, body, headings, etc. In turn, each section is an unordered bag of terms (keywords). A page is represented by the union of the terms in each section:

$$p_i = \{pw_1^{s_1}, pw_2^{s_2}, \ldots, pw_m^{s_m}\}$$

where pw stands for a page word and the superscript indicates the section of each term. A term can be a unigram or a phrase extracted by a phrase extractor [12]. Similarly, we represent each ad as a set of sections a = {a_1, a_2, ..., a_l}, each section in turn being an unordered set of terms:

$$a_i = \{aw_1^{s_1}, aw_2^{s_2}, \ldots, aw_k^{s_k}\}$$

where aw is an ad word. The ads in our experiments have 3 sections: title, body, and bid phrase. In this work, to produce the match score we use only the ad/page textual information, leaving user information and other data for future work. Next, each page and ad term is associated with a weight based on the tf-idf values. The tf value is determined based on the individual ad sections. There are several choices for the value of idf, based on different scopes. On the ad side, it has been shown in previous work that the set of ads of one campaign provides a good scope for the estimation of idf that leads to improved matching results [8]. However, in this work for simplicity we do not take campaigns into account. To combine the impact of the term's section and its tf-idf score, the ad/page term weight is defined as:

$$\mathrm{tWeight}(w^{s_i}) = \mathrm{weightSection}(s_i) \cdot \mathrm{tfidf}(w)$$

where tWeight stands for term weight and weightSection(s_i) is the weight assigned to a page or ad section. This weight is a fixed parameter determined by experimentation. Each ad and page is classified into the topical taxonomy. We define these two mappings:

$$\mathrm{Tax}(p_i) = \{pc_1, \ldots, pc_u\}, \qquad \mathrm{Tax}(a_i) = \{ac_1, \ldots, ac_v\}$$

where pc and ac are page and ad classes correspondingly. Each assignment is associated with a weight given by the classifier. The weights are normalized to sum to 1:

$$\sum_{c \in \mathrm{Tax}(x_i)} \mathrm{cWeight}(c) = 1$$

where $x_i$ is either a page or an ad, and cWeight(c) is the class weight, i.e. the normalized confidence assigned by the classifier. The number of classes can vary between different pages and ads. This corresponds to the number of topics a page/ad can be associated with and is almost always in the range 1-4. Now we define the relevance score of an ad $a_i$ and page $p_i$ as a convex combination of the keyword (syntactic) and classification (semantic) score:

$$\mathrm{Score}(p_i, a_i) = \alpha \cdot \mathrm{TaxScore}(\mathrm{Tax}(p_i), \mathrm{Tax}(a_i)) + (1 - \alpha) \cdot \mathrm{KeywordScore}(p_i, a_i)$$

The parameter α determines the relative weight of the taxonomy score and the keyword score. To calculate the keyword score we use the vector space model [1] where both the pages and ads are represented in n-dimensional space - one dimension for each distinct term. The magnitude of each dimension is determined by the tWeight() formula. The keyword score is then defined as the cosine of the angle between the page and the ad vectors:

$$\mathrm{KeywordScore}(p_i, a_i) = \frac{\sum_{k \in K} \mathrm{tWeight}_{p_i}(k) \cdot \mathrm{tWeight}_{a_i}(k)}{\sqrt{\sum_{k \in K} \mathrm{tWeight}_{p_i}(k)^2} \; \sqrt{\sum_{k \in K} \mathrm{tWeight}_{a_i}(k)^2}}$$

where K is the set of all the keywords. The formula assumes independence between the words in the pages and ads. Furthermore, it ignores the order and the proximity of the terms in the scoring. We experimented with the impact of phrases and proximity on the keyword score and did not see a substantial impact of these two factors.
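The following is a hedged sketch of the scoring machinery just defined: section-weighted tf-idf term weights, the cosine KeywordScore, and the convex combination with TaxScore. The section weights shown are illustrative placeholders, not the experimentally tuned values used in the system.

```python
# Sketch of tWeight, KeywordScore, and the convex combination Score.
import math

def t_weight(tfidf_value, section, weight_section):
    # tWeight(w^s) = weightSection(s) * tfidf(w)
    return weight_section[section] * tfidf_value

def keyword_score(page_vec, ad_vec):
    """Cosine between page and ad vectors of tWeight values ({term: weight})."""
    dot = sum(w * ad_vec.get(t, 0.0) for t, w in page_vec.items())
    norm_p = math.sqrt(sum(w * w for w in page_vec.values()))
    norm_a = math.sqrt(sum(w * w for w in ad_vec.values()))
    return dot / (norm_p * norm_a) if norm_p and norm_a else 0.0

def relevance_score(page_vec, ad_vec, page_classes, ad_classes,
                    tax_score, alpha=0.8):
    """Score = alpha * TaxScore + (1 - alpha) * KeywordScore."""
    return (alpha * tax_score(page_classes, ad_classes)
            + (1 - alpha) * keyword_score(page_vec, ad_vec))

# Hypothetical section weights; the paper only says they are fixed
# parameters determined by experimentation.
weight_section = {"title": 2.0, "body": 1.0, "bid_phrase": 3.0}
```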
We now turn to the definition of the TaxScore. This function indicates the topical match between a given ad and a page. As opposed to the keywords, which are treated as independent dimensions, here the classes (topics) are organized into a hierarchy. One of the goals in the design of the TaxScore function is to be able to generalize within the taxonomy, that is, to accept topically related ads. Generalization can help to place ads in cases when there is no ad that matches both the category and the keywords of the page. The example in Figure 2 illustrates this situation. In this example, in the absence of ski ads, a page about skiing containing the word "Atomic" could be matched to the available snowboarding ad for the same brand.
Figure 2: Two generalization paths
In general we would like the match to be stronger when both the ad and the page are classified into the same node and weaker when the distance between the nodes in the taxonomy gets larger. There are multiple ways to specify the distance between two taxonomy nodes. Besides the above requirement, this function should lend itself to an efficient search of the ad space. Given a page we have to find the ad in a few milliseconds, as this impacts the presentation to a waiting user. This will be further discussed in the next section. To capture both the generalization and efficiency requirements we define the TaxScore function as follows:

$$\mathrm{TaxScore}(PC, AC) = \sum_{pc \in PC} \sum_{ac \in AC} \mathrm{idist}(\mathrm{LCA}(pc, ac), ac) \cdot \mathrm{cWeight}(pc) \cdot \mathrm{cWeight}(ac)$$

In this function we consider every combination of page class and ad class. For each combination we multiply the product of the class weights with the inverse distance function between the least common ancestor of the two classes (LCA) and the ad class. The inverse distance function idist(c1, c2) takes two nodes on the same path in the class taxonomy and returns a number in the interval [0, 1] depending on the distance of the two class nodes. It returns 1 if the two nodes are the same, and declines toward 0 when LCA(pc, ac) or ac is the root of the taxonomy. The rate of decline determines the weight of the generalization versus the other terms in the scoring formula. To determine the rate of decline we consider the impact on the specificity of the match when we substitute a class with one of its ancestors. In general the impact is topic dependent. For example the node "Hobby" in our taxonomy has tens of children, each representing a different hobby, two examples being "Sailing" and "Knitting". Placing an ad about "Knitting" on a page about "Sailing" makes little sense. However, in the "Winter Sports" example above, in the absence of a better alternative, skiing ads could be put on snowboarding pages as they might promote the same venues, equipment vendors, etc. Such detailed analysis on a case by case basis is prohibitively expensive due to the size of the taxonomy. One option is to use the confidences of the ancestor classes as given by the classifier. However, we found these numbers not suitable for this purpose as the magnitude of the confidences does not necessarily decrease when going up the tree. Another option is to use explore-exploit methods based on machine learning from the click feedback as described in [10]. However, for simplicity, in this work we chose a simple heuristic to determine the cost of generalization from a child to a parent. In this heuristic we look at the broadening of the scope when moving from a child to a parent. We estimate the broadening by the density of ads classified in the parent node vs. the child node. The density is obtained by classifying a large set of ads in the taxonomy using the document classifier described above. Based on this, let $n_c$ be the number of documents classified into the subtree rooted at c. Then we define:

$$\mathrm{idist}(c, p) = \frac{n_c}{n_p}$$

where c represents the child node and p is the parent node. This fraction can be viewed as the probability that an ad belonging to the parent topic is suitable for the child topic.
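To recap this scoring function concretely before discussing how to search the ad space, here is a minimal sketch of idist() and TaxScore() over a toy taxonomy. The double sum over class pairs is our reading of "every combination of page class and ad class", and the toy classes and counts are hypothetical.

```python
# Sketch of TaxScore over a taxonomy given by parent pointers and per-node
# ad counts n[c] (number of ads classified into the subtree rooted at c).
def ancestors(node, parent):
    """Path from node up to the root, inclusive."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def idist(child, anc, n):
    # The per-edge ratios n_c / n_p telescope along the path, so the
    # inverse distance from child up to ancestor anc is n[child] / n[anc];
    # it is 1 when child == anc, matching the definition above.
    return n[child] / n[anc] if n[anc] else 0.0

def lca(c1, c2, parent):
    a1 = set(ancestors(c1, parent))
    return next(a for a in ancestors(c2, parent) if a in a1)

def tax_score(page_classes, ad_classes, parent, n):
    """page_classes / ad_classes: {class: cWeight}, weights summing to 1."""
    return sum(pw * aw * idist(ac, lca(pc, ac, parent), n)
               for pc, pw in page_classes.items()
               for ac, aw in ad_classes.items())

# Toy usage with a hypothetical fragment of the taxonomy:
parent = {"skiing": "winter sports", "snowboarding": "winter sports",
          "winter sports": "sports"}
n = {"skiing": 40, "snowboarding": 20, "winter sports": 100, "sports": 500}
print(tax_score({"skiing": 1.0}, {"snowboarding": 1.0}, parent, n))  # 0.2
```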
6. SEARCHING THE AD SPACE In the previous section we discussed the choice of scoring function to estimate the match between an ad and a page. The top-k ads with the highest score are offered by the system for placement on the publisher's page. The process of score calculation and ad selection is to be done in real time and therefore must be very efficient. As the ad collections are in the range of hundreds of millions of entries, there is a need for indexed access to the ads. Inverted indexes provide scalable and low latency solutions for searching documents. However, these have been traditionally used to search based on keywords. To be able to search the ads on a combination of keywords and classes we have mapped the classification match to term match and adapted the scoring function to be suitable for fast evaluation over inverted indexes. In this section we overview the ad indexing and the ranking function of our prototype ad search system for matching ads and pages. We used a standard inverted index framework where there is one posting list for each distinct term. The ads are parsed into terms and each term is associated with a weight based on the section in which it appears. Weights from distinct occurrences of a term in an ad are added together, so that the posting lists contain one entry per term/ad combination. The next challenge is how to index the ads so that the class information is preserved in the index. A simple method is to create unique meta-terms for the classes and annotate each ad with one meta-term for each assigned class. However, this method does not allow for generalization, since only the ads matching an exact label of the page would be selected. Therefore we annotated each ad with one meta-term for each ancestor of the assigned class. The weights of the meta-terms are assigned according to the value of the idist() function defined in the previous section. On the query side, given the keywords and the class of a page, we compose a keyword-only query by inserting one class term for each ancestor of the classes assigned to the page. The scoring function is adapted to produce a two-part score: one for the class meta-terms and another for the text terms. During the class score calculation, for each class path we use only the lowest class meta-term, ignoring the others. For example, if an ad belongs to the "Skiing" class and is annotated with both "Skiing" and its parent "Winter Sports", the index will contain the special class meta-terms for both "Skiing" and "Winter Sports" (and all their ancestors) with the weights given by the product of the classifier confidence and the idist function. When matching with a page classified into "Skiing", the query will contain terms for the class "Skiing" and for each of its ancestors. However, when scoring an ad classified into "Skiing" we will use the weight for the "Skiing" class meta-term. Ads classified into "Snowboarding" will be scored using the weight of the "Winter Sports" meta-term. To perform this check efficiently we keep a sorted list of all the class paths and, at scoring time, we search the paths bottom up for a meta-term appearing in the ad. The first meta-term is used for scoring, the rest are ignored.
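The sketch below illustrates this annotation scheme, reusing the ancestors() and idist() helpers from the earlier TaxScore sketch. The "CLASS::" prefix is a hypothetical encoding of meta-terms, and taking the max over shared ancestors of several class paths is our own simplification.

```python
# Sketch of class meta-term generation for the inverted index.
# Reuses ancestors() and idist() defined in the previous sketch.
def ad_meta_terms(ad_classes, parent, n):
    """ad_classes: {class: classifier confidence}.
    Returns {meta-term: weight}: one meta-term per ancestor of each
    assigned class, weighted by confidence * idist()."""
    terms = {}
    for ac, conf in ad_classes.items():
        for anc in ancestors(ac, parent):
            w = conf * idist(ac, anc, n)
            key = "CLASS::" + anc
            # If several class paths share an ancestor, keep the larger weight.
            terms[key] = max(terms.get(key, 0.0), w)
    return terms

def page_class_query_terms(page_classes, parent):
    """Query side: one class term per ancestor of each page class."""
    return {"CLASS::" + anc
            for pc in page_classes
            for anc in ancestors(pc, parent)}
```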
At runtime, we evaluate the query using a variant of the WAND algorithm [3]. This is a document-at-a-time algorithm [1] that uses a branch-and-bound approach to derive efficient moves for the cursors associated with the postings lists. It finds the next cursor to be moved based on an upper bound of the score for the documents at which the cursors are currently positioned. The algorithm keeps a heap of current best candidates. Documents with an upper bound smaller than the current minimum score among the candidate documents can be eliminated from further consideration, and thus the cursors can skip over them. To find the upper bound for a document, the algorithm assumes that all cursors that are before it will hit this document (i.e. the document contains all those terms represented by cursors before or at that document). It has been shown that WAND can be used with any function that is monotonic with respect to the number of matching terms in the document. Our scoring function is monotonic since the score can never decrease when more terms are found in the document. In the special case when we add a cursor representing an ancestor of a class term already factored into the score, the score simply does not change (we add 0). Given these properties, we use an adaptation of the WAND algorithm where we change the calculation of the scoring function and the upper bound score calculation to reflect our scoring function. The rest of the algorithm remains unchanged. 7. EXPERIMENTAL EVALUATION 7.1 Data and Methodology We quantify the effect of the semantic-syntactic matching using a set of 105 pages. This set of pages was selected by a random sample of a larger set of around 20 million pages with contextual advertising. Ads for each of these pages have been selected from a larger pool of ads (tens of millions) by previous experiments conducted by Yahoo! US for other purposes. Each page-ad pair has been judged by three or more human judges on a 1 to 3 scale:
1. Relevant. The ad is semantically directly related to the main subject of the page. For example if the page is about the National Football League and the ad is about tickets for NFL games, it would be scored as 1.
2. Somewhat relevant. The ad is related to the secondary subject of the page, or is related to the main topic of the page in a general way. In the NFL page example, an ad about NFL branded products would be judged as 2.
3. Irrelevant. The ad is unrelated to the page. For example a mention of the NFL player John Maytag triggers washing machine ads on a NFL page.
Table 1: Dataset statistics
To obtain a score for a page-ad pair we average all the scores and then round to the closest integer. We then used these judgments to evaluate how well our methods distinguish the positive and the negative ad assignments for each page. The statistics of the page dataset are given in Table 1. The original experiments that paired the pages and the ads are loosely related to the syntactic keyword based matching and classification based assignment but used different taxonomies and keyword extraction techniques. Therefore we could not use standard pooling as an evaluation method since we did not control the way the pairs were selected and could not precisely establish the set of ads from which the placed ads were selected. Instead, in our evaluation for each page we consider only those ads for which we have judgments. Each different method was applied to this set and the ads were ranked by the score. The relative effectiveness of the algorithms was judged by comparing how well the methods separated the ads with positive judgment from the ads with negative judgment. We present precision at various levels of recall within this set. As the set of ads per page is relatively small, the evaluation reports precision that is higher than it would be with a larger set of negative ads. However, these numbers still establish the relative performance of the algorithms and we can use them to evaluate performance at different score thresholds.
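For concreteness, here is a hedged sketch of this per-page evaluation with hypothetical inputs. Reading precision off at the rank where each recall level is first reached is one simple interpolation choice, not necessarily the exact procedure used in the experiments.

```python
# Sketch of the evaluation protocol: average-and-round the judges' scores,
# rank judged ads by the model score, and report precision at recall levels.
def judge_score(judge_scores):
    # Average the 1-3 judge scores and round to the closest integer
    # (note: Python's round ties to even at .5).
    return round(sum(judge_scores) / len(judge_scores))

def precision_at_recall(judged, model_score, levels=(0.25, 0.5, 0.75, 1.0)):
    """judged: list of (ad, rounded judgment); 1 or 2 counts as positive.
    model_score: ad -> float. Returns {recall level: precision}."""
    ranked = sorted(judged, key=lambda x: -model_score[x[0]])
    positives = sum(1 for _, s in judged if s <= 2)
    if positives == 0:
        return {}
    out, found = {}, 0
    for rank, (_, s) in enumerate(ranked, start=1):
        if s <= 2:
            found += 1
            for lvl in levels:
                if lvl not in out and found / positives >= lvl:
                    out[lvl] = found / rank
    return out

# Toy usage (hypothetical ads, judgments, and scores):
judged = [("ad1", 1), ("ad2", 3), ("ad3", 2), ("ad4", 3)]
scores = {"ad1": 0.9, "ad2": 0.7, "ad3": 0.6, "ad4": 0.2}
print(precision_at_recall(judged, scores))
```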
In addition to the precision-recall over the judged ads, we also present Kendall's τ rank correlation coefficient to establish how far from the perfect ordering are the orderings produced by our ranking algorithms. For this test we ranked the judged ads by the scores assigned by the judges and then compared this order to the order assigned by our algorithms. Finally we also examined the precision at positions 1, 3 and 5. 7.2 Results Figure 3 shows the precision-recall curves for the syntactic matching (keywords only) vs. the syntactic-semantic matching with the optimal value of α = 0.8 (judged by the 11-point score [1]). In this figure, we assume that the ad-page pairs judged with 1 or 2 are positive examples and the 3s are negative examples. We also examined counting only the pairs judged with 1 as positive examples and did not find a significant change in the relative performance of the tested methods. Overlaid are also results using phrases in the keyword match. We see that these additional features do not change the relative performance of the algorithm. The graphs show a significant impact of the class information, especially in the mid range of recall values. In the low recall part of the chart the curves meet. This indicates that when the keyword match is really strong (i.e. when the ad is almost contained within the page) the precision of the syntactic keyword match is comparable with that of the semantic-syntactic match.
Figure 3: Data Set 2: Precision vs. Recall of syntactic match (α = 0) vs. syntactic-semantic match (α = 0.8)
Table 2: Kendall's τ for different α values
Note however that the two methods might produce different ads and could be used as complements at this level of recall. The semantic component provides the largest lift in precision in the mid range of recall, where a 25% improvement is achieved by using the class information for ad placement. This means that when there is somewhat of a match between the ad and the page, the restriction to the right classes provides a better scope for selecting the ads. Table 2 shows the Kendall's τ values for different values of α. We calculated the values by ranking all the judged ads for each page and averaging the values over all the pages. The ads with tied judgment were given the rank of the middle position. The results show that when we take into account all the ad-page pairs, the optimal value of α is around 0.5. Note that the purely syntactic match (α = 0) is by far the weakest method. Figure 4 shows the effect of the parameter α in the scoring. We see that in most cases precision grows or is flat when we increase α, except at the low level of recall where, due to the small number of data points, there is a bit of jitter in the results. Table 3 shows the precision at positions 1, 3 and 5. Again, the purely syntactic method has clearly the lowest score by individual positions and the total number of correctly placed ads. The numbers are close due to the small number of ads considered, but there are still some noticeable trends. For position 1 the optimal α is in the range of 0.25 to 0.75. For positions 3 and 5 the optimum is at α = 1. This also indicates that for those ads that have a very high keyword score, the semantic information is somewhat less important.
If almost all the words in an ad appear in the page, this ad is likely to be relevant for this page. However, when there is no such clear affinity, the class information becomes a dominant factor.
Figure 4: Impact of α on precision for different levels of recall

  α         #1   #3   #5   sum
  α = 0     80   70   68   218
  α = 0.25  89   76   73   238
  α = 0.5   89   74   73   236
  α = 0.75  89   78   73   240
  α = 1     86   79   74   239

Table 3: Precision at positions 1, 3 and 5

8. CONCLUSION Contextual advertising is the economic engine behind a large number of non-transactional sites on the Web. Studies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content. All existing commercial contextual match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads. A natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads. However, individual phrases and words might have multiple meanings and/or be unrelated to the overall topic of the page, leading to mismatched ads. In this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score. The semantic match relies on the classification of pages and ads into a 6000-node commercial advertising taxonomy to determine their topical distance. As the classification relies on the full content of the page, it is more robust than individual page phrases. The semantic match is complemented with a syntactic match and the final score is a convex combination of the two sub-scores with the relative weight of each determined by a parameter α. We evaluated the semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising. As shown in our experimental evaluation, the optimal value of the parameter α depends on the precise objective of optimization (precision at a particular position, precision at a given recall). However, in all cases the optimal value of α is between 0.25 and 0.9, indicating a significant effect of the semantic score component. The effectiveness of the syntactic match depends on the quality of the pages used. In lower quality pages we are more likely to make classification errors that will then negatively impact the matching. We demonstrated that it is feasible to build a large scale classifier that has sufficiently good precision for this application. We are currently examining how to employ machine learning algorithms to learn the optimal value of α based on a collection of features of the input pages.
A Semantic Approach to Contextual Advertising ABSTRACT Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features. 1. INTRODUCTION Web advertising supports a large swath of today's Internet ecosystem. The total internet advertiser spend in US alone in 2006 is estimated at over 17 billion dollars with a growth rate of almost 20% year over year. A large part of this market consists of textual ads, that is, short text messages usually marked as "sponsored links" or similar. The main advertising channels used to distribute textual ads are: 1. Sponsored Search or Paid Search advertising which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query. All major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency. 2. Contextual advertising or Context Match which refers to the placement of commercial ads within the content of a generic web page. In contextual advertising usually there is a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience. Again, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services but there are also many smaller players. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. (See [5] for a "brief history"). However, today, almost all of the for-profit non-transactional web sites (that is, sites that do not sell anything directly) rely at least in part on revenue from context match. CM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers. Without this model, the web would be a lot smaller! The prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC). 
There are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad and pay-per-action where the advertiser pays only if the ad leads to a sale or similar transaction. For simplicity, we only deal with the PPC model in this paper. Given a page, rather than placing generic ads, it seems preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks. This intuition is supported by the analogy to conventional publishing where there are very successful magazines (e.g. Vogue) where a majority of the content is topical advertising (fashion in the case of Vogue) and by user studies that have confirmed that increased relevance increases the number of ad-clicks [4, 13]. Previous published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details). However targeting mechanisms based solely on phrases found within the text of the page can lead to problems: For example, a page about a famous golfer named "John Maytag" might trigger an ad for "Maytag dishwashers" since Maytag is a popular brand. Another example could be a page describing the Chevy Tahoe truck (a popular vehicle in US) triggering an ad about "Lake Tahoe vacations". Polysemy is not the only culprit: there is a (maybe apocryphal) story about a lurid news item about a headless body found in a suitcase triggering an ad for Samsonite luggage! In all these examples the mismatch arises from the fact that the ads are not appropriate for the context. In order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase. The semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula. Hence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach. Furthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page. For example if the page is about an event in curling, a rare winter sport, and contains the words "Alpine Meadows", the system would still rank highly ads for skiing in Alpine Meadows as these ads belong to the class "skiing" which is a sibling of the class "curling" and both of these classes share the parent "winter sports". In some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy. The taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say "Canon". The keywords capture the specificity to a level that is more dynamic and granular. In the digital camera example this would correspond to the level of a particular model, say "Canon SD450" whose advertising life might be just a few months. Updating the taxonomy with new nodes or even new vocabulary each time a new model comes to the market is prohibitively expensive when we are dealing with millions of manufacturers. 
In addition to increased click through rate (CTR) due to increased relevance, a significant but harder to quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience. In the Chevy Tahoe example above, the classifier would establish that the page is about cars/automotive and only those ads will be considered. Even if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain. To implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity would be small enough) with high precision (to reduce the probability of mis-match). We evaluated several classifiers and taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes using a variation of the Rocchio's classifier [9]. This classifier gave the best results in both page and ad classification, and ultimately in ad relevance. The paper proceeds as follows. In the next section we review the basic principles of the contextual advertising. Section 3 overviews the related work. Section 4 describes the taxonomy and document classifier that were used for page and ad classification. Section 5 describes the semanticsyntactic method. In Section 6 we briefly discuss how to search efficiently the ad space in order to return the top-k ranked ads. Experimental evaluation is presented in Section 7. Finally, Section 8 presents the concluding remarks. 2. OVERVIEW OF CONTEXTUAL ADVERTISING 3. RELATED WORK Online advertising in general and contextual advertising in particular are emerging areas of research. The published literature is very sparse. A study presented in [13] confirms the intuition that ads need to be relevant to the user's interest to avoid degrading the user's experience and increase the probability of reaction. A recent work by Ribeiro-Neto et. al. [8] examines a number of strategies to match pages to ads based on extracted keywords. The ads and pages are represented as vectors in a vector space. The first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector. To find out the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector. The winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors. While both pages and ads are mapped to the same space, there is a discrepancy (impendence mismatch) between the vocabulary used in the ads and in the pages. Furthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take into account synonyms. To solve this problem, Ribeiro-Neto et al expand the page vocabulary with terms from other similar pages weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision. In a follow-up work [7] the authors propose a method to learn impact of individual features using genetic programming to produce a matching function. The function is represented as a tree composed of arithmetic operators and the log function as internal nodes, and different numerical features of the query and ad terms as leafs. 
The results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8]. Another approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads. In [14] a system for phrase extraction is described that used a variety of features to determine the importance of page phrases for advertising purposes. The system is trained with pages that have been hand annotated with important phrases. The learning algorithm takes into account features based on tf-idf, html meta data and query logs to detect the most important phrases. During evaluation, each page phrase up to length 5 is considered as potential result and evaluated against a trained classifier. In our work we also experimented with a phrase extractor based on the work reported in [12]. While increasing slightly the precision, it did not change the relative performance of the explored algorithms. 4. PAGE AND AD CLASSIFICATION 4.1 Taxonomy Choice 4.2 Classification Method 5. SEMANTIC-SYNTACTIC MATCHING 6. SEARCHING THE AD SPACE 7. EXPERIMENTAL EVALUATION 7.1 Data and Methodology 7.2 Results 8. CONCLUSION Contextual advertising is the economic engine behind a large number of non-transactional sites on the Web. Studies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content. All existing commercial contextual match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads. A natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads. However, individual phrases and words might have multiple meanings and/or be unrelated to the overall topic of the page leading to miss-matched ads. In this paper we proposed a novel way of matching advertisements to web pages that rely on a topical (semantic) match as a major component of the relevance score. The semantic match relies on the classification of pages and ads into a 6000 nodes commercial advertising taxonomy to determine their topical distance. As the classification relies on the full content of the page, it is more robust than individual page phrases. The semantic match is complemented with a syntactic match and the final score is a convex combination of the two sub-scores with the relative weight of each determined by a parameter α. We evaluated the semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising. As shown in our experimental evaluation, the optimal value of the parameter α depends on the precise objective of optimization (precision at particular position, precision at given recall). However in all cases the optimal value of α is between 0.25 and 0.9 indicating significant effect of the semantic score component. The effectiveness of the syntactic match depends on the quality of the pages used. In lower quality pages we are more likely to make classification errors that will then negatively impact the matching. We demonstrated that it is feasible to build a large scale classifier that has sufficient good precision for this application. We are currently examining how to employ machine learning algorithms to learn the optimal value of α based on a collection of features of the input pages.
A Semantic Approach to Contextual Advertising ABSTRACT Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features. 1. INTRODUCTION Web advertising supports a large swath of today's Internet ecosystem. A large part of this market consists of textual ads, that is, short text messages usually marked as "sponsored links" or similar. The main advertising channels used to distribute textual ads are: 1. Sponsored Search or Paid Search advertising which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query. All major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency. 2. Contextual advertising or Context Match which refers to the placement of commercial ads within the content of a generic web page. In contextual advertising usually there is a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience. Again, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services but there are also many smaller players. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. CM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers. Without this model, the web would be a lot smaller! The prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC). There are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad and pay-per-action where the advertiser pays only if the ad leads to a sale or similar transaction. For simplicity, we only deal with the PPC model in this paper. Given a page, rather than placing generic ads, it seems preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks. 
Previous published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details). However targeting mechanisms based solely on phrases found within the text of the page can lead to problems: For example, a page about a famous golfer named "John Maytag" might trigger an ad for "Maytag dishwashers" since Maytag is a popular brand. Another example could be a page describing the Chevy Tahoe truck (a popular vehicle in US) triggering an ad about "Lake Tahoe vacations". In order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase. The semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula. Hence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach. Furthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page. In some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy. The taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say "Canon". In the digital camera example this would correspond to the level of a particular model, say "Canon SD450" whose advertising life might be just a few months. In addition to increased click through rate (CTR) due to increased relevance, a significant but harder to quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience. In the Chevy Tahoe example above, the classifier would establish that the page is about cars/automotive and only those ads will be considered. Even if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain. To implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity would be small enough) with high precision (to reduce the probability of mis-match). We evaluated several classifiers and taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes using a variation of the Rocchio's classifier [9]. This classifier gave the best results in both page and ad classification, and ultimately in ad relevance. The paper proceeds as follows. In the next section we review the basic principles of the contextual advertising. Section 3 overviews the related work. Section 4 describes the taxonomy and document classifier that were used for page and ad classification. Section 5 describes the semanticsyntactic method. In Section 6 we briefly discuss how to search efficiently the ad space in order to return the top-k ranked ads. Experimental evaluation is presented in Section 7. Finally, Section 8 presents the concluding remarks. 3. RELATED WORK Online advertising in general and contextual advertising in particular are emerging areas of research. The published literature is very sparse. A recent work by Ribeiro-Neto et. al. [8] examines a number of strategies to match pages to ads based on extracted keywords. 
The ads and pages are represented as vectors in a vector space. The first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector. To find out the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector. The winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors. While both pages and ads are mapped to the same space, there is a discrepancy (impendence mismatch) between the vocabulary used in the ads and in the pages. Furthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take into account synonyms. To solve this problem, Ribeiro-Neto et al expand the page vocabulary with terms from other similar pages weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision. In a follow-up work [7] the authors propose a method to learn impact of individual features using genetic programming to produce a matching function. The results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8]. Another approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads. In [14] a system for phrase extraction is described that used a variety of features to determine the importance of page phrases for advertising purposes. The system is trained with pages that have been hand annotated with important phrases. The learning algorithm takes into account features based on tf-idf, html meta data and query logs to detect the most important phrases. During evaluation, each page phrase up to length 5 is considered as potential result and evaluated against a trained classifier. In our work we also experimented with a phrase extractor based on the work reported in [12]. While increasing slightly the precision, it did not change the relative performance of the explored algorithms. 8. CONCLUSION Contextual advertising is the economic engine behind a large number of non-transactional sites on the Web. Studies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content. All existing commercial contextual match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads. A natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads. However, individual phrases and words might have multiple meanings and/or be unrelated to the overall topic of the page leading to miss-matched ads. In this paper we proposed a novel way of matching advertisements to web pages that rely on a topical (semantic) match as a major component of the relevance score. The semantic match relies on the classification of pages and ads into a 6000 nodes commercial advertising taxonomy to determine their topical distance. As the classification relies on the full content of the page, it is more robust than individual page phrases. 
8. CONCLUSION Contextual advertising is the economic engine behind a large number of non-transactional sites on the Web. Studies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content. All existing commercial contextual match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads. A natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads. However, individual phrases and words might have multiple meanings and/or be unrelated to the overall topic of the page, leading to mismatched ads. In this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score. The semantic match relies on the classification of pages and ads into a 6,000-node commercial advertising taxonomy to determine their topical distance. As the classification relies on the full content of the page, it is more robust than individual page phrases. The semantic match is complemented with a syntactic match, and the final score is a convex combination of the two sub-scores, with the relative weight of each determined by a parameter α. We evaluated the semantic-syntactic approach against a purely syntactic approach over a set of pages with contextual advertising. In all cases the optimal value of α is between 0.25 and 0.9, indicating a significant effect of the semantic score component. The effectiveness of the semantic match depends on the quality of the pages used: on lower-quality pages we are more likely to make classification errors, which then negatively impact the matching. We are currently examining how to employ machine learning algorithms to learn the optimal value of α based on a collection of features of the input pages.
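For concreteness, the final ranking formula sketched as code; the sub-scores are assumed to be precomputed and normalized to [0, 1], and the example values are illustrative only.

def combined_score(semantic_score: float, syntactic_score: float, alpha: float = 0.8) -> float:
    # Convex combination of the semantic and syntactic sub-scores, weighted by alpha.
    assert 0.0 <= alpha <= 1.0
    return alpha * semantic_score + (1.0 - alpha) * syntactic_score

# Example: a topically close ad with weak keyword overlap still ranks well.
print(combined_score(semantic_score=0.9, syntactic_score=0.2, alpha=0.8))  # 0.76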
H-90
Context-Sensitive Information Retrieval Using Implicit Feedback
A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set. Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially.
[ "context", "implicit feedback inform", "clickthrough inform", "retriev accuraci", "current queri", "relev feedback", "interact retriev", "kl-diverg retriev model", "context-sensit languag", "long-term context", "short-term context", "fix coeffici interpol", "bayesian estim", "trec data set", "mean averag precis", "queri histori inform", "queri histori", "queri expans" ]
[ "P", "P", "P", "P", "P", "M", "R", "M", "R", "M", "M", "U", "U", "R", "U", "M", "M", "M" ]
Context-Sensitive Information Retrieval Using Implicit Feedback Xuehua Shen Department of Computer Science University of Illinois at Urbana-Champaign Bin Tan Department of Computer Science University of Illinois at Urbana-Champaign ChengXiang Zhai Department of Computer Science University of Illinois at Urbana-Champaign ABSTRACT A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set. Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models General Terms Algorithms 1. INTRODUCTION In most existing information retrieval models, the retrieval problem is treated as involving one single query and a set of documents. From a single query, however, the retrieval system can only have very limited clue about the user's information need. An optimal retrieval system thus should try to exploit as much additional context information as possible to improve retrieval accuracy, whenever it is available. Indeed, context-sensitive retrieval has been identified as a major challenge in information retrieval research [2]. There are many kinds of context that we can exploit. Relevance feedback [14] can be considered as a way for a user to provide more context of search and is known to be effective for improving retrieval accuracy. However, relevance feedback requires that a user explicitly provides feedback information, such as specifying the category of the information need or marking a subset of retrieved documents as relevant documents. Since it forces the user to engage additional activities while the benefits are not always obvious to the user, a user is often reluctant to provide such feedback information. Thus the effectiveness of relevance feedback may be limited in real applications. For this reason, implicit feedback has attracted much attention recently [11, 13, 18, 17, 12]. In general, the retrieval results using the user's initial query may not be satisfactory; often, the user would need to revise the query to improve the retrieval/ranking accuracy [8]. For a complex or difficult information need, the user may need to modify his/her query and view ranked documents with many iterations before the information need is completely satisfied. In such an interactive retrieval scenario, the information naturally available to the retrieval system is more than just the current user query and the document collection - in general, all the interaction history can be available to the retrieval system, including past queries, information about which documents the user has chosen to view, and even how a user has read a document (e.g., which part of a document the user spends a lot of time in reading).
We define implicit feedback broadly as exploiting all such naturally available interaction history to improve retrieval results. A major advantage of implicit feedback is that we can improve the retrieval accuracy without requiring any user effort. For example, if the current query is java, without knowing any extra information, it would be impossible to know whether it is intended to mean the Java programming language or the Java island in Indonesia. As a result, the retrieved documents will likely have both kinds of documents - some may be about the programming language and some may be about the island. However, any particular user is unlikely searching for both types of documents. Such an ambiguity can be resolved by exploiting history information. For example, if we know that the previous query from the user is cgi programming, it would strongly suggest that it is the programming language that the user is searching for. Implicit feedback was studied in several previous works. In [11], Joachims explored how to capture and exploit the clickthrough information and demonstrated that such implicit feedback information can indeed improve the search accuracy for a group of people. In [18], a simulation study of the effectiveness of different implicit feedback algorithms was conducted, and several retrieval models designed for exploiting clickthrough information were proposed and evaluated. In [17], some existing retrieval algorithms are adapted to improve search results based on the browsing history of a user. Other related work on using context includes personalized search [1, 3, 4, 7, 10], query log analysis [5], context factors [12], and implicit queries [6]. While the previous work has mostly focused on using clickthrough information, in this paper, we use both clickthrough information and preceding queries, and focus on developing new context-sensitive language models for retrieval. Specifically, we develop models for using implicit feedback information such as query and clickthrough history of the current search session to improve retrieval accuracy. We use the KL-divergence retrieval model [19] as the basis and propose to treat context-sensitive retrieval as estimating a query language model based on the current query and any search context information. We propose several statistical language models to incorporate query and clickthrough history into the KL-divergence model. One challenge in studying implicit feedback models is that there does not exist any suitable test collection for evaluation. We thus use the TREC AP data to create a test collection with implicit feedback information, which can be used to quantitatively evaluate implicit feedback models. To the best of our knowledge, this is the first test set for implicit feedback. We evaluate the proposed models using this data set. The experimental results show that using implicit feedback information, especially the clickthrough data, can substantially improve retrieval performance without requiring additional effort from the user. The remaining sections are organized as follows. In Section 2, we attempt to define the problem of implicit feedback and introduce some terms that we will use later. In Section 3, we propose several implicit feedback models based on statistical language models. In Section 4, we describe how we create the data set for implicit feedback experiments. In Section 5, we evaluate different implicit feedback models on the created data set. Section 6 is our conclusions and future work. 2. 
PROBLEM DEFINITION There are two kinds of context information we can use for implicit feedback. One is short-term context, which is the immediate surrounding information which throws light on a user's current information need in a single session. A session can be considered as a period consisting of all interactions for the same information need. The category of a user's information need (e.g., kids or sports), previous queries, and recently viewed documents are all examples of short-term context. Such information is most directly related to the current information need of the user and thus can be expected to be most useful for improving the current search. In general, short-term context is most useful for improving search in the current session, but may not be so helpful for search activities in a different session. The other kind of context is long-term context, which refers to information such as a user's education level and general interest, accumulated user query history and past user clickthrough information; such information is generally stable for a long time and is often accumulated over time. Long-term context can be applicable to all sessions, but may not be as effective as the short-term context in improving search accuracy for a particular session. In this paper, we focus on the short-term context, though some of our methods can also be used to naturally incorporate some long-term context. In a single search session, a user may interact with the search system several times. During interactions, the user would continuously modify the query. Therefore for the current query Qk (except for the first query of a search session), there is a query history, HQ = (Q1, ..., Qk−1) associated with it, which consists of the preceding queries given by the same user in the current session. Note that we assume that the session boundaries are known in this paper. In practice, we need techniques to automatically discover session boundaries, which have been studied in [9, 16]. Traditionally, the retrieval system only uses the current query Qk to do retrieval. But the short-term query history clearly may provide useful clues about the user's current information need as seen in the "java" example given in the previous section. Indeed, our previous work [15] has shown that the short-term query history is useful for improving retrieval accuracy. In addition to the query history, there may be other short-term context information available. For example, a user would presumably frequently click some documents to view. We refer to data associated with these actions as clickthrough history. The clickthrough data may include the title, summary, and perhaps also the content and location (e.g., the URL) of the clicked document. Although it is not clear whether a viewed document is actually relevant to the user's information need, we may safely assume that the displayed summary/title information about the document is attractive to the user, thus conveys information about the user's information need. Suppose we concatenate all the displayed text information about a document (usually title and summary) together, we will also have a clicked summary Ci in each round of retrieval. In general, we may have a history of clicked summaries C1, ..., Ck−1. We will also exploit such clickthrough history HC = (C1, ..., Ck−1) to improve our search accuracy for the current query Qk. Previous work has also shown positive results using similar clickthrough information [11, 17].
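One possible in-memory representation of this search context (current query Qk, query history HQ, and clickthrough history HC) is sketched below; the structure and names are our own illustration, not part of the paper.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SearchSession:
    """Interaction history of one session: queries Q1..Qk and clicked summaries C1..Ck-1."""
    queries: List[str] = field(default_factory=list)    # Q1, ..., Qk
    summaries: List[str] = field(default_factory=list)  # C1, ..., Ck-1, one concatenated text per round

    @property
    def current_query(self) -> str:  # Qk
        return self.queries[-1]

    @property
    def query_history(self) -> List[str]:  # HQ = (Q1, ..., Qk-1)
        return self.queries[:-1]

    @property
    def clickthrough_history(self) -> List[str]:  # HC = (C1, ..., Ck-1)
        return self.summaries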
Both query history and clickthrough history are implicit feedback information, which naturally exists in interactive information retrieval, thus no additional user effort is needed to collect them. In this paper, we study how to exploit such information (HQ and HC), develop models to incorporate the query history and clickthrough history into a retrieval ranking function, and quantitatively evaluate these models. 3. LANGUAGE MODELS FOR CONTEXT-SENSITIVE INFORMATION RETRIEVAL Intuitively, the query history HQ and clickthrough history HC are both useful for improving search accuracy for the current query Qk. An important research question is how we can exploit such information effectively. We propose to use statistical language models to model a user's information need and develop four specific context-sensitive language models to incorporate context information into a basic retrieval model. 3.1 Basic retrieval model We use the Kullback-Leibler (KL) divergence method [19] as our basic retrieval method. According to this model, the retrieval task involves computing a query language model θQ for a given query and a document language model θD for a document and then computing their KL divergence D(θQ||θD), which serves as the score of the document. One advantage of this approach is that we can naturally incorporate the search context as additional evidence to improve our estimate of the query language model. Formally, let HQ = (Q1, ..., Qk−1) be the query history and the current query be Qk. Let HC = (C1, ..., Ck−1) be the clickthrough history. Note that Ci is the concatenation of all clicked documents' summaries in the i-th round of retrieval since we may reasonably treat all these summaries equally. Our task is to estimate a context query model, which we denote by p(w|θk), based on the current query Qk, as well as the query history HQ and clickthrough history HC. We now describe several different language models for exploiting HQ and HC to estimate p(w|θk). We will use c(w, X) to denote the count of word w in text X, which could be either a query or a clicked document's summary or any other text. We will use |X| to denote the length of text X or the total number of words in X. 3.2 Fixed Coefficient Interpolation (FixInt) Our first idea is to summarize the query history HQ with a unigram language model p(w|HQ) and the clickthrough history HC with another unigram language model p(w|HC). Then we linearly interpolate these two history models to obtain the history model p(w|H). Finally, we interpolate the history model p(w|H) with the current query model p(w|Qk). These models are defined as follows:

p(w|Qi) = c(w, Qi) / |Qi|
p(w|HQ) = (1/(k−1)) Σ_{i=1}^{k−1} p(w|Qi)
p(w|Ci) = c(w, Ci) / |Ci|
p(w|HC) = (1/(k−1)) Σ_{i=1}^{k−1} p(w|Ci)
p(w|H) = β p(w|HC) + (1 − β) p(w|HQ)
p(w|θk) = α p(w|Qk) + (1 − α) p(w|H)

where β ∈ [0, 1] is a parameter to control the weight on each history model, and where α ∈ [0, 1] is a parameter to control the weight on the current query and the history information. If we combine these equations, we see that

p(w|θk) = α p(w|Qk) + (1 − α)[β p(w|HC) + (1 − β) p(w|HQ)]

That is, the estimated context query model is just a fixed coefficient interpolation of three models p(w|Qk), p(w|HQ), and p(w|HC).
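As an illustration, here is a minimal Python sketch of the FixInt estimate under the definitions above. Whitespace tokenization and the default parameter values (α = 0.1, β = 1.0, echoing the experimental settings reported later) are our own assumptions, not part of the model itself.

from collections import Counter

def unigram(text):
    # Maximum-likelihood unigram model: p(w|X) = c(w, X) / |X| over whitespace tokens.
    words = text.split()
    return {w: c / len(words) for w, c in Counter(words).items()}

def average(models):
    # Uniform average of unigram models, used for p(w|HQ) and p(w|HC).
    out = {}
    for m in models:
        for w, p in m.items():
            out[w] = out.get(w, 0.0) + p / len(models)
    return out

def fixint(current_query, query_history, clicked_summaries, alpha=0.1, beta=1.0):
    # p(w|theta_k) = alpha*p(w|Qk) + (1-alpha)*[beta*p(w|HC) + (1-beta)*p(w|HQ)]
    p_q = unigram(current_query)
    p_hq = average([unigram(q) for q in query_history])
    p_hc = average([unigram(c) for c in clicked_summaries])
    vocab = set(p_q) | set(p_hq) | set(p_hc)
    return {w: alpha * p_q.get(w, 0.0)
               + (1 - alpha) * (beta * p_hc.get(w, 0.0) + (1 - beta) * p_hq.get(w, 0.0))
            for w in vocab}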
3.3 Bayesian Interpolation (BayesInt) One possible problem with the FixInt approach is that the coefficients, especially α, are fixed across all the queries. But intuitively, if our current query Qk is very long, we should trust the current query more, whereas if Qk has just one word, it may be beneficial to put more weight on the history. To capture this intuition, we treat p(w|HQ) and p(w|HC) as Dirichlet priors and Qk as the observed data to estimate a context query model using a Bayesian estimator. The estimated model is given by

p(w|θk) = (c(w, Qk) + µ p(w|HQ) + ν p(w|HC)) / (|Qk| + µ + ν)
        = (|Qk| / (|Qk| + µ + ν)) p(w|Qk) + ((µ + ν) / (|Qk| + µ + ν)) [(µ / (µ + ν)) p(w|HQ) + (ν / (µ + ν)) p(w|HC)]

where µ is the prior sample size for p(w|HQ) and ν is the prior sample size for p(w|HC). We see that the only difference between BayesInt and FixInt is that the interpolation coefficients are now adaptive to the query length. Indeed, when viewing BayesInt as FixInt, we see that α = |Qk| / (|Qk| + µ + ν) and β = ν / (ν + µ), thus with fixed µ and ν, we will have a query-dependent α. Later we will show that such an adaptive α empirically performs better than a fixed α.
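A matching sketch of the BayesInt estimate, reusing unigram() and average() from the FixInt sketch; the defaults µ = 0.2 and ν = 5.0 mirror the settings used in the experiments below and are otherwise arbitrary.

def bayesint(current_query, query_history, clicked_summaries, mu=0.2, nu=5.0):
    # p(w|theta_k) = (c(w,Qk) + mu*p(w|HQ) + nu*p(w|HC)) / (|Qk| + mu + nu);
    # the interpolation weights adapt to the current query length |Qk|.
    words = current_query.split()
    counts = Counter(words)                 # c(w, Qk)
    p_hq = average([unigram(q) for q in query_history])
    p_hc = average([unigram(c) for c in clicked_summaries])
    vocab = set(counts) | set(p_hq) | set(p_hc)
    denom = len(words) + mu + nu            # |Qk| + mu + nu
    return {w: (counts.get(w, 0) + mu * p_hq.get(w, 0.0) + nu * p_hc.get(w, 0.0)) / denom
            for w in vocab}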
3.4 Online Bayesian Updating (OnlineUp) Both FixInt and BayesInt summarize the history information by averaging the unigram language models estimated based on previous queries or clicked summaries. This means that all previous queries are treated equally and so are all clicked summaries. However, as the user interacts with the system and acquires more knowledge about the information in the collection, presumably, the reformulated queries will become better and better. Thus assigning decaying weights to the previous queries so as to trust a recent query more than an earlier query appears to be reasonable. Interestingly, if we incrementally update our belief about the user's information need after seeing each query, we could naturally obtain decaying weights on the previous queries. Since such an incremental online updating strategy can be used to exploit any evidence in an interactive retrieval system, we present it in a more general way. In a typical retrieval system, the retrieval system responds to every new query entered by the user by presenting a ranked list of documents. In order to rank documents, the system must have some model for the user's information need. In the KL divergence retrieval model, this means that the system must compute a query model whenever a user enters a (new) query. A principled way of updating the query model is to use Bayesian estimation, which we discuss below. 3.4.1 Bayesian updating We first discuss how we apply Bayesian estimation to update a query model in general. Let p(w|φ) be our current query model and T be a new piece of text evidence observed (e.g., T can be a query or a clicked summary). To update the query model based on T, we use φ to define a Dirichlet prior parameterized as Dir(µT p(w1|φ), ..., µT p(wN|φ)), where µT is the equivalent sample size of the prior. We use a Dirichlet prior because it is a conjugate prior for multinomial distributions. With such a conjugate prior, the predictive distribution of the updated model φ′ (or equivalently, the mean of the posterior distribution of φ) is given by

p(w|φ′) = (c(w, T) + µT p(w|φ)) / (|T| + µT)    (1)

where c(w, T) is the count of w in T and |T| is the length of T. Parameter µT indicates our confidence in the prior expressed in terms of an equivalent text sample comparable with T. For example, µT = 1 indicates that the influence of the prior is equivalent to adding one extra word to T. 3.4.2 Sequential query model updating We now discuss how we can update our query model over time during an interactive retrieval process using Bayesian estimation. In general, we assume that the retrieval system maintains a current query model φi at any moment. As soon as we obtain some implicit feedback evidence in the form of a piece of text Ti, we will update the query model. Initially, before we see any user query, we may already have some information about the user. For example, we may have some information about what documents the user has viewed in the past. We use such information to define a prior on the query model, which is denoted by φ0. After we observe the first query Q1, we can update the query model based on the newly observed data Q1. The updated query model φ1 can then be used for ranking documents in response to Q1. As the user views some documents, the displayed summary text for such documents, C1 (i.e., the clicked summaries), can serve as new data for us to further update the query model to obtain φ′1. As we obtain the second query Q2 from the user, we can update φ′1 to obtain a new model φ2. In general, we may repeat such an updating process to iteratively update the query model. Clearly, we see two types of updating: (1) updating based on a new query Qi; (2) updating based on a new clicked summary Ci. In both cases, we can treat the current model as a prior of the context query model and treat the new observed query or clicked summary as observed data. Thus we have the following updating equations:

p(w|φi) = (c(w, Qi) + µi p(w|φ′i−1)) / (|Qi| + µi)
p(w|φ′i) = (c(w, Ci) + νi p(w|φi)) / (|Ci| + νi)

where µi is the equivalent sample size for the prior when updating the model based on a query, while νi is the equivalent sample size for the prior when updating the model based on a clicked summary. If we set µi = 0 (or νi = 0) we essentially ignore the prior model, and thus would start a completely new query model based on the query Qi (or the clicked summary Ci). On the other hand, if we set µi = +∞ (or νi = +∞) we essentially ignore the observed query (or the clicked summary) and do not update our model; the model remains the same as if we did not observe any new text evidence. In general, the parameters µi and νi may have different values for different i. For example, at the very beginning, we may have a very sparse query history, thus we could use a smaller µi, but later as the query history becomes richer, we can consider using a larger µi. But in our experiments, unless otherwise stated, we set them to the same constants, i.e., ∀i, j, µi = µj, νi = νj. Note that we can take either p(w|φi) or p(w|φ′i) as our context query model for ranking documents. This suggests that we do not have to wait until a user enters a new query to initiate a new round of retrieval; instead, as soon as we collect the clicked summary Ci, we can update the query model and use p(w|φ′i) to immediately rerank any documents that the user has not yet seen. To score documents after seeing query Qk, we use p(w|φk), i.e., p(w|θk) = p(w|φk).
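The following sketch implements the two OnlineUp updating equations through a single Dirichlet-prior update (Eq. (1)); treating each query and clicked summary as a new chunk of evidence yields the decaying weights discussed above. The default sample sizes echo the experimental settings, and the function names are ours.

def dirichlet_update(prior, text, sample_size):
    # Eq. (1): p(w|phi') = (c(w, T) + mu_T * p(w|phi)) / (|T| + mu_T).
    counts = Counter(text.split())
    denom = sum(counts.values()) + sample_size
    vocab = set(counts) | set(prior)
    return {w: (counts.get(w, 0) + sample_size * prior.get(w, 0.0)) / denom
            for w in vocab}

def onlineup(queries, summaries, mu=5.0, nu=15.0, prior=None):
    # Fold in Q1, C1, Q2, C2, ... in order; repeated interpolation makes the
    # weight of early evidence decay. Returns p(w|phi_k) after the last query.
    model = dict(prior or {})
    for i, q in enumerate(queries):
        model = dirichlet_update(model, q, mu)                 # update on query Qi
        if i < len(summaries):
            model = dirichlet_update(model, summaries[i], nu)  # update on clicked summary Ci
    return model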
3.5 Batch Bayesian updating (BatchUp) If we set the equivalent sample size parameters to a fixed constant, the OnlineUp algorithm would introduce a decaying factor - repeated interpolation would cause the early data to have a low weight. This may be appropriate for the query history, as it is reasonable to believe that the user becomes better and better at query formulation as time goes on, but it is not necessarily appropriate for the clickthrough information, especially because we use the displayed summary, rather than the actual content, of a clicked document. One way to avoid applying a decaying interpolation to the clickthrough data is to do OnlineUp only for the query history Q = (Q1, ..., Qi−1), but not for the clickthrough data C. We first buffer all the clickthrough data together and use the whole chunk of clickthrough data to update the model generated through running OnlineUp on the previous queries. The updating equations are as follows:

p(w|φi) = (c(w, Qi) + µi p(w|φi−1)) / (|Qi| + µi)
p(w|ψi) = (Σ_{j=1}^{i−1} c(w, Cj) + νi p(w|φi)) / (Σ_{j=1}^{i−1} |Cj| + νi)

where µi has the same interpretation as in OnlineUp, but νi now indicates to what extent we want to trust the clicked summaries. As in OnlineUp, we set all µi's and νi's to the same value. And to rank documents after seeing the current query Qk, we use p(w|θk) = p(w|ψk).
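For contrast, a sketch of BatchUp reusing dirichlet_update() from the OnlineUp sketch: the query history is folded in sequentially, while all clicked summaries are buffered and applied in one batch update, avoiding the decaying weights on early clickthrough evidence.

def batchup(queries, summaries, mu=2.0, nu=15.0, prior=None):
    model = dict(prior or {})
    for q in queries:                            # sequential updates on Q1..Qk only
        model = dirichlet_update(model, q, mu)
    clicks = " ".join(summaries)                 # buffer C1..Ck-1 into one chunk
    return dirichlet_update(model, clicks, nu)   # single batch update -> p(w|psi_k)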
4. DATA COLLECTION In order to quantitatively evaluate our models, we need a data set which includes not only a text database and testing topics, but also query history and clickthrough history for each topic. Since there is no such data set available to us, we have to create one. There are two choices. One is to extract topics and any associated query history and clickthrough history for each topic from the log of a retrieval system (e.g., a search engine). But the problem is that we have no relevance judgments on such data. The other choice is to use a TREC data set, which has a text database, topic descriptions and a relevance judgment file. Unfortunately, there are no query history and clickthrough history data. We decided to augment a TREC data set by collecting query history and clickthrough history data. We select the TREC AP88, AP89 and AP90 data as our text database, because the AP data has been used in several TREC tasks and has relatively complete judgments. There are altogether 242,918 news articles and the average document length is 416 words. Most articles have titles. If not, we select the first sentence of the text as the title. For the preprocessing, we only do case folding and do not do stopword removal or stemming. We select 30 relatively difficult topics from TREC topics 1-150. These 30 topics have the worst average precision performance among TREC topics 1-150 according to some baseline experiments using the KL-divergence model with Bayesian prior smoothing [20]. The reason why we select difficult topics is that the user would then have to have several interactions with the retrieval system in order to get satisfactory results, so that we can expect to collect relatively richer query history and clickthrough history data from the user. In real applications, we may also expect our models to be most useful for such difficult topics, so our data collection strategy reflects the real world applications well. We index the TREC AP data set and set up a search engine and web interface for TREC AP news articles. We use 3 subjects to do experiments to collect query history and clickthrough history data. Each subject is assigned 10 topics and given the topic descriptions provided by TREC. For each topic, the first query is the title of the topic given in the original TREC topic description. After the subject submits the query, the search engine will do retrieval and return a ranked list of search results to the subject. The subject will browse the results and maybe click one or more results to browse the full text of the article(s). The subject may also modify the query to do another search. For each topic, the subject composes at least 4 queries. In our experiment, only the first 4 queries for each topic are used. The user needs to select the topic number from a selection menu before submitting the query to the search engine so that we can easily detect the session boundary, which is not the focus of our study. We use a relational database to store user interactions, including the submitted queries and clicked documents. For each query, we store the query terms and the associated result pages. And for each clicked document, we store the summary as shown on the search result page. The summary of the article is query dependent and is computed online using fixed-length passage retrieval (KL divergence model with Bayesian prior smoothing). Among the 120 (4 for each of 30 topics) queries which we study in the experiment, the average query length is 3.71 words. Altogether there are 91 documents clicked to view, so on average there are around 3 clicks per topic. The average length of a clicked summary is 34.4 words. Among the 91 clicked documents, 29 documents are judged relevant according to the TREC judgment file. This data set is publicly available at http://sifaka.cs.uiuc.edu/ir/ucair/QCHistory.zip.

Table 1: Effect of using query history and clickthrough data for document ranking.

            FixInt                BayesInt              OnlineUp              BatchUp
            (α=0.1, β=1.0)        (µ=0.2, ν=5.0)        (µ=5.0, ν=15.0)       (µ=2.0, ν=15.0)
Query       MAP      pr@20docs    MAP      pr@20docs    MAP      pr@20docs    MAP      pr@20docs
q1          0.0095   0.0317       0.0095   0.0317       0.0095   0.0317       0.0095   0.0317
q2          0.0312   0.1150       0.0312   0.1150       0.0312   0.1150       0.0312   0.1150
q2+HQ+HC    0.0324   0.1117       0.0345   0.1117       0.0215   0.0733       0.0342   0.1100
Improve.    3.8%     -2.9%        10.6%    -2.9%        -31.1%   -36.3%       9.6%     -4.3%
q3          0.0421   0.1483       0.0421   0.1483       0.0421   0.1483       0.0421   0.1483
q3+HQ+HC    0.0726   0.1967       0.0816   0.2067       0.0706   0.1783       0.0810   0.2067
Improve.    72.4%    32.6%        93.8%    39.4%        67.7%    20.2%        92.4%    39.4%
q4          0.0536   0.1933       0.0536   0.1933       0.0536   0.1933       0.0536   0.1933
q4+HQ+HC    0.0891   0.2233       0.0955   0.2317       0.0792   0.2067       0.0950   0.2250
Improve.    66.2%    15.5%        78.2%    19.9%        47.8%    6.9%         77.2%    16.4%

5. EXPERIMENTS 5.1 Experiment design Our major hypothesis is that using search context (i.e., query history and clickthrough information) can help improve search accuracy. In particular, the search context can provide extra information to help us estimate a better query model than using just the current query. So most of our experiments involve comparing the retrieval performance using the current query only (thus ignoring any context) with that using the current query as well as the search context. Since we collected four versions of queries for each topic, we make such comparisons for each version of queries. We use two performance measures: (1) Mean Average Precision (MAP): This is the standard non-interpolated average precision and serves as a good measure of the overall ranking accuracy. (2) Precision at 20 documents (pr@20docs): This measure does not average well, but it is more meaningful than MAP and reflects the utility for users who only read the top 20 documents. In all cases, the reported figure is the average over all of the 30 topics. We evaluate the four models for exploiting search context (i.e., FixInt, BayesInt, OnlineUp, and BatchUp). Each model has precisely two parameters (α and β for FixInt; µ and ν for the others). Note that µ and ν may need to be interpreted differently for different methods. We vary these parameters and identify the optimal performance for each method. We also vary the parameters to study the sensitivity of our algorithms to the setting of the parameters.
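For concreteness, here is a small sketch of the two measures, assuming ranked document ids and TREC-style relevance sets; it is not the evaluation code used in the paper.

def average_precision(ranking, relevant):
    # Standard non-interpolated average precision for one topic.
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def precision_at(ranking, relevant, k=20):
    # Fraction of relevant documents among the top-k results (pr@20docs).
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def mean_average_precision(topics):
    # topics: iterable of (ranking, relevant_set) pairs, one per topic.
    topics = list(topics)
    return sum(average_precision(r, rel) for r, rel in topics) / len(topics)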
5.2 Result analysis 5.2.1 Overall effect of search context We compare the optimal performances of the four models with those using the current query only in Table 1. A row labeled with qi is the baseline performance and a row labeled with qi + HQ + HC is the performance of using search context. We can make several observations from this table: 1. Comparing the baseline performances indicates that on average reformulated queries are better than the previous queries, with the performance of q4 being the best. Users generally formulate better and better queries. 2. Using search context generally has a positive effect, especially when the context is rich. This can be seen from the fact that the improvement for q4 and q3 is generally more substantial compared with q2. Actually, in many cases with q2, using the context may hurt the performance, probably because the history at that point is sparse. When the search context is rich, the performance improvement can be quite substantial. For example, BatchUp achieves 92.4% improvement in the mean average precision over q3 and 77.2% improvement over q4. (The generally low precisions also make the relative improvement deceptively high, though.) 3. Among the four models using search context, the performances of FixInt and OnlineUp are clearly worse than those of BayesInt and BatchUp. Since BayesInt performs better than FixInt and the main difference between BayesInt and FixInt is that the former uses an adaptive coefficient for interpolation, the results suggest that using an adaptive coefficient is quite beneficial and a Bayesian style interpolation makes sense. The main difference between OnlineUp and BatchUp is that OnlineUp uses decaying coefficients to combine the multiple clicked summaries, while BatchUp simply concatenates all clicked summaries. Therefore the fact that BatchUp is consistently better than OnlineUp indicates that the weights for combining the clicked summaries indeed should not be decaying. While OnlineUp is theoretically appealing, its performance is inferior to BayesInt and BatchUp, likely because of the decaying coefficient. Overall, BatchUp appears to be the best method when we vary the parameter settings. We have two different kinds of search context - query history and clickthrough data. We now look into the contribution of each kind of context. 5.2.2 Using query history only In each of the four models, we can turn off the clickthrough history data by setting parameters appropriately. This allows us to evaluate the effect of using query history alone. We use the same parameter setting for query history as in Table 1. The results are shown in Table 2. Here we see that in general, the benefit of using query history is very limited, with mixed results. This is different from what is reported in a previous study [15], where using query history is consistently helpful. Another observation is that the context runs perform poorly at q2, but generally perform (slightly) better than the baselines for q3 and q4. This is again likely because at the beginning the initial query, which is the title in the original TREC topic description, may not be a good query; indeed, on average, the performances of these first-generation queries are clearly poorer than those of all other user-formulated queries in the later generations. Yet another observation is that when using query history only, the BayesInt model appears to be better than the other models.
Since the clickthrough data is ignored, OnlineUp and BatchUp are essentially the same algorithm.

Table 2: Effect of using query history only for document ranking.

            FixInt              BayesInt            OnlineUp            BatchUp
            (α=0.1, β=0)        (µ=0.2, ν=0)        (µ=5.0, ν=+∞)       (µ=2.0, ν=+∞)
Query       MAP      pr@20docs  MAP      pr@20docs  MAP      pr@20docs  MAP      pr@20docs
q2          0.0312   0.1150     0.0312   0.1150     0.0312   0.1150     0.0312   0.1150
q2+HQ       0.0097   0.0317     0.0311   0.1200     0.0213   0.0783     0.0287   0.0967
Improve.    -68.9%   -72.4%     -0.3%    4.3%       -31.7%   -31.9%     -8.0%    -15.9%
q3          0.0421   0.1483     0.0421   0.1483     0.0421   0.1483     0.0421   0.1483
q3+HQ       0.0261   0.0917     0.0451   0.1517     0.0444   0.1333     0.0455   0.1450
Improve.    -38.2%   -38.2%     7.1%     2.3%       5.5%     -10.1%     8.1%     -2.2%
q4          0.0536   0.1933     0.0536   0.1933     0.0536   0.1933     0.0536   0.1933
q4+HQ       0.0428   0.1467     0.0537   0.1917     0.0550   0.1733     0.0552   0.1917
Improve.    -20.1%   -24.1%     0.2%     -0.8%      3.0%     -10.3%     3.0%     -0.8%

Table 3: Average precision of BatchUp using query history only.

µ            0       0.5     1       2       3       4       5       6       7       8       9
q2+HQ MAP    0.0312  0.0313  0.0308  0.0287  0.0257  0.0231  0.0213  0.0194  0.0183  0.0182  0.0164
q3+HQ MAP    0.0421  0.0442  0.0441  0.0455  0.0457  0.0458  0.0444  0.0439  0.0430  0.0390  0.0335
q4+HQ MAP    0.0536  0.0546  0.0547  0.0552  0.0544  0.0548  0.0550  0.0541  0.0534  0.0525  0.0513

The displayed results thus reflect the variation caused by parameter µ. A smaller setting of 2.0 is seen to be better than a larger value of 5.0. A more complete picture of the influence of the setting of µ can be seen from Table 3, where we show the performance figures for a wider range of values of µ. The value of µ can be interpreted as how many words we regard the query history as worth. A larger value thus puts more weight on the history and is seen to hurt the performance more when the history information is not rich. Thus while for q4 the best performance tends to be achieved for µ ∈ [2, 5], only when µ = 0.5 do we see some small benefit for q2. As we would expect, an excessively large µ would hurt the performance in general, but q2 is hurt most and q4 is barely hurt, indicating that as we accumulate more and more query history information, we can put more and more weight on the history information. This also suggests that a better strategy should probably dynamically adjust parameters according to how much history information we have. The mixed query history results suggest that the positive effect of using implicit feedback information may have largely come from the use of clickthrough history, which is indeed true as we discuss in the next subsection. 5.2.3 Using clickthrough history only We now turn off the query history and only use the clicked summaries plus the current query. The results are shown in Table 4. We see that the benefit of using clickthrough information is much more significant than that of using query history. We see an overall positive effect, often with significant improvement over the baseline. It is also clear that the richer the context data is, the more improvement using clicked summaries can achieve. Other than some occasional degradation of precision at 20 documents, the improvement is fairly consistent and often quite substantial. These results show that the clicked summary text is in general quite useful for inferring a user's information need. Intuitively, using the summary text, rather than the actual content of the document, makes more sense, as it is quite possible that the document behind a seemingly relevant summary is actually non-relevant. 29 out of the 91 clicked documents are relevant.
Updating the query model based on such summaries would bring up the ranks of these relevant documents, causing performance improvement. However, such improvement is really not beneficial for the user, as the user has already seen these relevant documents. To see how much improvement we have achieved on improving the ranks of the unseen relevant documents, we exclude these 29 relevant documents from our judgment file and recompute the performance of BayesInt and the baseline using the new judgment file. The results are shown in Table 5. Note that the performance of the baseline method is lower due to the removal of the 29 relevant documents, which would have been generally ranked high in the results. From Table 5, we see clearly that using clicked summaries also helps improve the ranks of unseen relevant documents significantly.

Table 4: Effect of using clickthrough data only for document ranking.

            FixInt              BayesInt            OnlineUp                        BatchUp
            (α=0.1, β=1)        (µ=0, ν=5.0)        (µk=5.0, ν=15, ∀i<k µi=+∞)      (µ=0, ν=15)
Query       MAP      pr@20docs  MAP      pr@20docs  MAP      pr@20docs              MAP      pr@20docs
q2          0.0312   0.1150     0.0312   0.1150     0.0312   0.1150                 0.0312   0.1150
q2+HC       0.0324   0.1117     0.0338   0.1133     0.0358   0.1300                 0.0344   0.1167
Improve.    3.8%     -2.9%      8.3%     -1.5%      14.7%    13.0%                  10.3%    1.5%
q3          0.0421   0.1483     0.0421   0.1483     0.0421   0.1483                 0.0420   0.1483
q3+HC       0.0726   0.1967     0.0766   0.2033     0.0622   0.1767                 0.0513   0.1650
Improve.    72.4%    32.6%      81.9%    37.1%      47.7%    19.2%                  21.9%    11.3%
q4          0.0536   0.1930     0.0536   0.1930     0.0536   0.1930                 0.0536   0.1930
q4+HC       0.0891   0.2233     0.0925   0.2283     0.0772   0.2217                 0.0623   0.2050
Improve.    66.2%    15.5%      72.6%    18.1%      44.0%    14.7%                  16.2%    6.1%

Table 5: BayesInt (µ = 0, ν = 5.0) evaluated on unseen relevant documents.

Query       MAP      pr@20docs
q2          0.0263   0.100
q2+HC       0.0314   0.100
Improve.    19.4%    0%
q3          0.0331   0.125
q3+HC       0.0661   0.178
Improve.    99.7%    42.4%
q4          0.0442   0.165
q4+HC       0.0739   0.188
Improve.    67.2%    13.9%

One remaining question is whether the clickthrough data is still helpful if none of the clicked documents is relevant. To answer this question, we took out the 29 relevant summaries from our clickthrough history data HC to obtain a smaller set of clicked summaries HC′, and re-evaluated the performance of the BayesInt method using HC′ with the same setting of parameters as in Table 4. The results are shown in Table 6. We see that although the improvement is not as substantial as in Table 4, the average precision is improved across all generations of queries. These results should be interpreted as very encouraging, as they are based on only 62 non-relevant clickthroughs. In reality, a user would more likely click some relevant summaries, which would help bring up more relevant documents, as we have seen in Table 4 and Table 5.

Table 6: Effect of using only non-relevant clickthrough data (BayesInt, µ = 0, ν = 5.0).

Query       MAP      pr@20docs
q2          0.0312   0.1150
q2+HC′      0.0313   0.0950
Improve.    0.3%     -17.4%
q3          0.0421   0.1483
q3+HC′      0.0521   0.1820
Improve.    23.8%    23.0%
q4          0.0536   0.1930
q4+HC′      0.0620   0.1850
Improve.    15.7%    -4.1%

5.2.4 Additive effect of context information By comparing the results across Table 1, Table 2 and Table 4, we can see that the benefits of the query history information and of the clickthrough information are mostly additive, i.e., combining them can achieve better performance than using each alone, but most improvement has clearly come from the clickthrough information. In Table 7, we show this effect for the BatchUp method.
Table 7: Additive benefit of context information (BatchUp).

Query       MAP      pr@20docs
q2          0.0312   0.1150
q2+HQ       0.0287   0.0967
Improve.    -8.0%    -15.9%
q2+HC       0.0344   0.1167
Improve.    10.3%    1.5%
q2+HQ+HC    0.0342   0.1100
Improve.    9.6%     -4.3%
q3          0.0421   0.1483
q3+HQ       0.0455   0.1450
Improve.    8.1%     -2.2%
q3+HC       0.0513   0.1650
Improve.    21.9%    11.3%
q3+HQ+HC    0.0810   0.2067
Improve.    92.4%    39.4%
q4          0.0536   0.1930
q4+HQ       0.0552   0.1917
Improve.    3.0%     -0.8%
q4+HC       0.0623   0.2050
Improve.    16.2%    6.1%
q4+HQ+HC    0.0950   0.2250
Improve.    77.2%    16.4%

5.2.5 Parameter sensitivity All four models have two parameters to control the relative weights of HQ, HC, and Qk, though the parameterization is different from model to model. In this subsection, we study the parameter sensitivity for BatchUp, which appears to perform relatively better than the others. BatchUp has two parameters µ and ν. We first look at µ. When µ is set to 0, the query history is not used at all, and we essentially just use the clickthrough data combined with the current query. If we increase µ, we will gradually incorporate more information from the previous queries. In Table 8, we show how the average precision of BatchUp changes as we vary µ with ν fixed to 15.0, where the best performance of BatchUp is achieved. We see that the performance is mostly insensitive to the change of µ for q3 and q4, but is decreasing as µ increases for q2. The pattern is also similar when we set ν to other values. In addition to the fact that q1 is generally worse than q2, q3, and q4, another possible reason why the sensitivity is lower for q3 and q4 may be that we generally have more clickthrough data available for q3 and q4 than for q2, and the dominating influence of the clickthrough data has made the small differences caused by µ less visible for q3 and q4. The best performance is generally achieved when µ is around 2.0, which means that the past query information is as useful as about 2 words in the current query. Except for q2, there is clearly some tradeoff between the current query and the previous queries, and using a balanced combination of them achieves better performance than using each of them alone. We now turn to the other parameter ν. When ν is set to 0, we only use the clickthrough data; when ν is set to +∞, we only use the query history and the current query. With µ set to 2.0, where the best performance of BatchUp is achieved, we vary ν and show the results in Table 9. We see that the performance is also not very sensitive when ν ≤ 30, with the best performance often achieved at ν = 15. This means that the combined information of query history and the current query is as useful as about 15 words in the clickthrough data, indicating that the clickthrough information is highly valuable. Overall, these sensitivity results show that BatchUp not only performs better than the other methods, but also is quite robust.

Table 8: Sensitivity of µ in BatchUp (ν = 15.0).

µ                     0       1       2       3       4       5       6       7       8       9       10
q2+HQ+HC MAP          0.0386  0.0366  0.0342  0.0315  0.0290  0.0267  0.0250  0.0236  0.0229  0.0223  0.0219
q2+HQ+HC pr@20docs    0.1333  0.1233  0.1100  0.1033  0.1017  0.0933  0.0833  0.0767  0.0783  0.0767  0.0750
q3+HQ+HC MAP          0.0805  0.0807  0.0811  0.0814  0.0813  0.0808  0.0804  0.0799  0.0795  0.0790  0.0788
q3+HQ+HC pr@20docs    0.2100  0.2150  0.2067  0.2050  0.2067  0.2050  0.2067  0.2067  0.2050  0.2017  0.2000
q4+HQ+HC MAP          0.0929  0.0947  0.0950  0.0940  0.0941  0.0940  0.0942  0.0937  0.0936  0.0932  0.0929
q4+HQ+HC pr@20docs    0.2183  0.2217  0.2250  0.2217  0.2233  0.2267  0.2283  0.2333  0.2333  0.2350  0.2333

Table 9: Sensitivity of ν in BatchUp (µ = 2.0).

ν                     0       1       2       5       10      15      30      100     300     500
q2+HQ+HC MAP          0.0278  0.0287  0.0296  0.0315  0.0334  0.0342  0.0328  0.0311  0.0296  0.0290
q2+HQ+HC pr@20docs    0.0933  0.0950  0.0950  0.1000  0.1050  0.1100  0.1150  0.0983  0.0967  0.0967
q3+HQ+HC MAP          0.0728  0.0739  0.0751  0.0786  0.0809  0.0811  0.0770  0.0634  0.0511  0.0491
q3+HQ+HC pr@20docs    0.1917  0.1933  0.1950  0.2100  0.2000  0.2067  0.2017  0.1783  0.1600  0.1550
q4+HQ+HC MAP          0.0895  0.0903  0.0914  0.0932  0.0944  0.0950  0.0919  0.0761  0.0664  0.0625
q4+HQ+HC pr@20docs    0.2267  0.2233  0.2283  0.2317  0.2233  0.2250  0.2283  0.2200  0.2067  0.2033

6. CONCLUSIONS AND FUTURE WORK In this paper, we have explored how to exploit implicit feedback information, including query history and clickthrough history within the same search session, to improve information retrieval performance. Using the KL-divergence retrieval model as the basis, we proposed and studied four statistical language models for context-sensitive information retrieval, i.e., FixInt, BayesInt, OnlineUp and BatchUp.
We use the TREC AP data to create a test set for evaluating implicit feedback models. Experiment results show that using implicit feedback, especially clickthrough history, can substantially improve retrieval performance without requiring any additional user effort. The current work can be extended in several ways: First, we have only explored some very simple language models for incorporating implicit feedback information. It would be interesting to develop more sophisticated models to better exploit query history and clickthrough history. For example, we may treat a clicked summary differently depending on whether the current query is a generalization or refinement of the previous query. Second, the proposed models can be implemented in any practical system. We are currently developing a client-side personalized search agent, which will incorporate some of the proposed algorithms. We will also do a user study to evaluate the effectiveness of these models in real web search. Finally, we should further study a general retrieval framework for sequential decision making in interactive information retrieval and study how to optimize some of the parameters in the context-sensitive retrieval models. 7. ACKNOWLEDGMENTS This material is based in part upon work supported by the National Science Foundation under award numbers IIS-0347933 and IIS-0428472. We thank the anonymous reviewers for their useful comments. 8. REFERENCES [1] E. Adar and D. Karger. Haystack: Per-user information environments. In Proceedings of CIKM 1999, 1999. [2] J. Allan et al. Challenges in information retrieval and language modeling. Workshop at the University of Massachusetts Amherst, 2002. [3] K. Bharat. Searchpad: Explicit capture of search context to support web search. In Proceedings of WWW 2000, 2000. [4] W. B. Croft, S. Cronen-Townsend, and V. Lavrenko. Relevance feedback and personalization: A language modeling perspective. In Proceedings of the Second DELOS Workshop: Personalisation and Recommender Systems in Digital Libraries, 2001. [5] H. Cui, J.-R. Wen, J.-Y. Nie, and W.-Y. Ma. Probabilistic query expansion using query logs. In Proceedings of WWW 2002, 2002. [6] S. T. Dumais, E. Cutrell, R. Sarin, and E. Horvitz. Implicit queries (IQ) for contextualized search (demo description). In Proceedings of SIGIR 2004, page 594, 2004.
[7] L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. Placing search in context: The concept revisited. In Proceedings of WWW 2001, 2001. [8] C. Huang, L. Chien, and Y. Oyang. Query session based term suggestion for interactive web search. In Proceedings of WWW 2001, 2001. [9] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic web log session identification with statistical language models. Journal of the American Society for Information Science and Technology, 55(14):1290-1303, 2004. [10] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of WWW 2003, 2003. [11] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of SIGKDD 2002, 2002. [12] D. Kelly and N. J. Belkin. Display time as implicit feedback: Understanding task effects. In Proceedings of SIGIR 2004, 2004. [13] D. Kelly and J. Teevan. Implicit feedback for inferring user preference. SIGIR Forum, 32(2), 2003. [14] J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice-Hall, Englewood Cliffs, NJ, 1971. [15] X. Shen and C. Zhai. Exploiting query history for document ranking in interactive information retrieval (poster). In Proceedings of SIGIR 2003, 2003. [16] S. Sriram, X. Shen, and C. Zhai. A session-based search engine (poster). In Proceedings of SIGIR 2004, 2004. [17] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proceedings of WWW 2004, 2004. [18] R. W. White, J. M. Jose, C. J. van Rijsbergen, and I. Ruthven. A simulated study of implicit feedback models. In Proceedings of ECIR 2004, pages 311-326, 2004. [19] C. Zhai and J. Lafferty. Model-based feedback in the KL-divergence retrieval model. In Proceedings of CIKM 2001, 2001. [20] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad-hoc information retrieval. In Proceedings of SIGIR 2001, 2001.
Context-Sensitive Information Retrieval Using Implicit Feedback ABSTRACT A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set. Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially. 1. INTRODUCTION In most existing information retrieval models, the retrieval problem is treated as involving one single query and a set of documents. From a single query, however, the retrieval system can only have very limited clue about the user's information need. An optimal retrieval system thus should try to exploit as much additional context information as possible to improve retrieval accuracy, whenever it is available. Indeed, context-sensitive retrieval has been identified as a major challenge in information retrieval research [2]. There are many kinds of context that we can exploit. Relevance feedback [14] can be considered as a way for a user to provide more context of search and is known to be effective for improving retrieval accuracy. However, relevance feedback requires that a user explicitly provides feedback information, such as specifying the category of the information need or marking a subset of retrieved documents as relevant documents. Since it forces the user to engage additional activities while the benefits are not always obvious to the user, a user is often reluctant to provide such feedback information. Thus the effectiveness of relevance feedback may be limited in real applications. For this reason, implicit feedback has attracted much attention recently [11, 13, 18, 17, 12]. In general, the retrieval results using the user's initial query may not be satisfactory; often, the user would need to revise the query to improve the retrieval/ranking accuracy [8]. For a complex or difficult information need, the user may need to modify his/her query and view ranked documents with many iterations before the information need is completely satisfied. In such an interactive retrieval scenario, the information naturally available to the retrieval system is more than just the current user query and the document collection--in general, all the interaction history can be available to the retrieval system, including past queries, information about which documents the user has chosen to view, and even how a user has read a document (e.g., which part of a document the user spends a lot of time in reading). We define implicit feedback broadly as exploiting all such naturally available interaction history to improve retrieval results. A major advantage of implicit feedback is that we can improve the retrieval accuracy without requiring any user effort.
For example, if the current query is "java", without knowing any extra information, it would be impossible to know whether it is intended to mean the Java programming language or the Java island in Indonesia. As a result, the retrieved documents will likely have both kinds of documents--some may be about the programming language and some may be about the island. However, any particular user is unlikely searching for both types of documents. Such an ambiguity can be resolved by exploiting history information. For example, if we know that the previous query from the user is "cgi programming", it would strongly suggest that it is the programming language that the user is searching for. Implicit feedback was studied in several previous works. In [11], Joachims explored how to capture and exploit the clickthrough information and demonstrated that such implicit feedback information can indeed improve the search accuracy for a group of people. In [18], a simulation study of the effectiveness of different implicit feedback algorithms was conducted, and several retrieval models designed for exploiting clickthrough information were proposed and evaluated. In [17], some existing retrieval algorithms are adapted to improve search results based on the browsing history of a user. Other related work on using context includes personalized search [1, 3, 4, 7, 10], query log analysis [5], context factors [12], and implicit queries [6]. While the previous work has mostly focused on using clickthrough information, in this paper, we use both clickthrough information and preceding queries, and focus on developing new context-sensitive language models for retrieval. Specifically, we develop models for using implicit feedback information such as query and clickthrough history of the current search session to improve retrieval accuracy. We use the KL-divergence retrieval model [19] as the basis and propose to treat context-sensitive retrieval as estimating a query language model based on the current query and any search context information. We propose several statistical language models to incorporate query and clickthrough history into the KL-divergence model. One challenge in studying implicit feedback models is that there does not exist any suitable test collection for evaluation. We thus use the TREC AP data to create a test collection with implicit feedback information, which can be used to quantitatively evaluate implicit feedback models. To the best of our knowledge, this is the first test set for implicit feedback. We evaluate the proposed models using this data set. The experimental results show that using implicit feedback information, especially the clickthrough data, can substantially improve retrieval performance without requiring additional effort from the user. The remaining sections are organized as follows. In Section 2, we attempt to define the problem of implicit feedback and introduce some terms that we will use later. In Section 3, we propose several implicit feedback models based on statistical language models. In Section 4, we describe how we create the data set for implicit feedback experiments. In Section 5, we evaluate different implicit feedback models on the created data set. Section 6 is our conclusions and future work. 2. PROBLEM DEFINITION There are two kinds of context information we can use for implicit feedback. One is short-term context, which is the immediate surrounding information which throws light on a user's current information need in a single session.
A session can be considered as a period consisting of all interactions for the same information need. The category of a user's information need (e.g., kids or sports), previous queries, and recently viewed documents are all examples of short-term context. Such information is most directly related to the current information need of the user and thus can be expected to be most useful for improving the current search. In general, short-term context is most useful for improving search in the current session, but may not be so helpful for search activities in a different session. The other kind of context is long-term context, which refers to information such as a user's education level and general interest, accumulated user query history and past user clickthrough information; such information is generally stable for a long time and is often accumulated over time. Long-term context can be applicable to all sessions, but may not be as effective as the short-term context in improving search accuracy for a particular session. In this paper, we focus on the short-term context, though some of our methods can also be used to naturally incorporate some long-term context. In a single search session, a user may interact with the search system several times. During interactions, the user would continuously modify the query. Therefore for the current query Qk (except for the first query of a search session), there is a query history, HQ = (Q1,..., Qk − 1) associated with it, which consists of the preceding queries given by the same user in the current session. Note that we assume that the session boundaries are known in this paper. In practice, we need techniques to automatically discover session boundaries, which have been studied in [9, 16]. Traditionally, the retrieval system only uses the current query Qk to do retrieval. But the short-term query history clearly may provide useful clues about the user's current information need as seen in the "java" example given in the previous section. Indeed, our previous work [15] has shown that the short-term query history is useful for improving retrieval accuracy. In addition to the query history, there may be other short-term context information available. For example, a user would presumably frequently click some documents to view. We refer to data associated with these actions as clickthrough history. The clickthrough data may include the title, summary, and perhaps also the content and location (e.g., the URL) of the clicked document. Although it is not clear whether a viewed document is actually relevant to the user's information need, we may "safely" assume that the displayed summary/title information about the document is attractive to the user, thus conveys information about the user's information need. Suppose we concatenate all the displayed text information about a document (usually title and summary) together, we will also have a clicked summary Ci in each round of retrieval. In general, we may have a history of clicked summaries C1,..., Ck − 1. We will also exploit such clickthrough history HC = (C1,..., Ck − 1) to improve our search accuracy for the current query Qk. Previous work has also shown positive results using similar clickthrough information [11, 17]. Both query history and clickthrough history are implicit feedback information, which naturally exists in interactive information retrieval, thus no additional user effort is needed to collect them. 
In this paper, we study how to exploit such information (H_Q and H_C), develop models to incorporate the query history and clickthrough history into a retrieval ranking function, and quantitatively evaluate these models.

3. LANGUAGE MODELS FOR CONTEXT-SENSITIVE INFORMATION RETRIEVAL

Intuitively, the query history H_Q and clickthrough history H_C are both useful for improving search accuracy for the current query Q_k. An important research question is how we can exploit such information effectively. We propose to use statistical language models to model a user's information need and develop four specific context-sensitive language models to incorporate context information into a basic retrieval model.

3.1 Basic retrieval model

We use the Kullback-Leibler (KL) divergence method [19] as our basic retrieval method. According to this model, the retrieval task involves computing a query language model θ_Q for a given query and a document language model θ_D for a document, and then computing their KL divergence D(θ_Q || θ_D), which serves as the score of the document. One advantage of this approach is that we can naturally incorporate the search context as additional evidence to improve our estimate of the query language model. Formally, let H_Q = (Q_1, ..., Q_{k-1}) be the query history and the current query be Q_k. Let H_C = (C_1, ..., C_{k-1}) be the clickthrough history. Note that C_i is the concatenation of all clicked documents' summaries in the i-th round of retrieval, since we may reasonably treat all these summaries equally. Our task is to estimate a context query model, which we denote by p(w|θ_k), based on the current query Q_k, as well as the query history H_Q and clickthrough history H_C. We now describe several different language models for exploiting H_Q and H_C to estimate p(w|θ_k). We will use c(w, X) to denote the count of word w in text X, which could be either a query or a clicked document's summary or any other text. We will use |X| to denote the length of text X, i.e., the total number of words in X.

3.2 Fixed Coefficient Interpolation (FixInt)

Our first idea is to summarize the query history H_Q with a unigram language model p(w|H_Q) and the clickthrough history H_C with another unigram language model p(w|H_C). Then we linearly interpolate these two history models to obtain the history model p(w|H). Finally, we interpolate the history model p(w|H) with the current query model p(w|Q_k). These models are defined as follows:

p(w|H_Q) = (1 / (k-1)) Σ_{i=1}^{k-1} p(w|Q_i)
p(w|H_C) = (1 / (k-1)) Σ_{i=1}^{k-1} p(w|C_i)
p(w|H) = β p(w|H_C) + (1 - β) p(w|H_Q)
p(w|θ_k) = α p(w|Q_k) + (1 - α) p(w|H)

where p(w|Q_i) and p(w|C_i) are unigram language models estimated from Q_i and C_i, β ∈ [0, 1] is a parameter to control the weight on each history model, and α ∈ [0, 1] is a parameter to control the weight on the current query and the history information. If we combine these equations, we see that

p(w|θ_k) = α p(w|Q_k) + (1 - α) [β p(w|H_C) + (1 - β) p(w|H_Q)]

That is, the estimated context query model is just a fixed-coefficient interpolation of the three models p(w|Q_k), p(w|H_Q), and p(w|H_C).

3.3 Bayesian Interpolation (BayesInt)

One possible problem with the FixInt approach is that the coefficients, especially α, are fixed across all the queries. But intuitively, if our current query Q_k is very long, we should trust the current query more, whereas if Q_k has just one word, it may be beneficial to put more weight on the history. To capture this intuition, we treat p(w|H_Q) and p(w|H_C) as Dirichlet priors and Q_k as the observed data to estimate a context query model using a Bayesian estimator. The estimated model is given by

p(w|θ_k) = (c(w, Q_k) + µ p(w|H_Q) + ν p(w|H_C)) / (|Q_k| + µ + ν)

where µ is the prior sample size for p(w|H_Q) and ν is the prior sample size for p(w|H_C).
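As a concrete illustration, here is a minimal Python sketch of the FixInt and BayesInt estimators over bag-of-words counts. The helper names are our own, smoothing is omitted, and empty histories (the first query of a session) are handled only by a trivial guard; this is a sketch of the equations above, not the authors' implementation.

```python
from collections import Counter
from typing import Dict, List

def unigram(text: str) -> Dict[str, float]:
    """Maximum-likelihood unigram model p(w|X) from whitespace tokens."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def average(models: List[Dict[str, float]]) -> Dict[str, float]:
    """Average several unigram models, e.g., p(w|H_Q) or p(w|H_C)."""
    if not models:          # no history yet (first query of the session)
        return {}
    vocab = set().union(*models)
    return {w: sum(m.get(w, 0.0) for m in models) / len(models) for w in vocab}

def fixint(qk: str, hq: List[str], hc: List[str], alpha: float, beta: float):
    """p(w|theta_k) = a*p(w|Qk) + (1-a)*[b*p(w|HC) + (1-b)*p(w|HQ)]."""
    pq = unigram(qk)
    phq = average([unigram(q) for q in hq])
    phc = average([unigram(c) for c in hc])
    vocab = set(pq) | set(phq) | set(phc)
    return {w: alpha * pq.get(w, 0.0)
               + (1 - alpha) * (beta * phc.get(w, 0.0) + (1 - beta) * phq.get(w, 0.0))
            for w in vocab}

def bayesint(qk: str, hq: List[str], hc: List[str], mu: float, nu: float):
    """p(w|theta_k) = (c(w,Qk) + mu*p(w|HQ) + nu*p(w|HC)) / (|Qk| + mu + nu)."""
    counts = Counter(qk.split())
    qlen = sum(counts.values())
    phq = average([unigram(q) for q in hq])
    phc = average([unigram(c) for c in hc])
    vocab = set(counts) | set(phq) | set(phc)
    return {w: (counts.get(w, 0) + mu * phq.get(w, 0.0) + nu * phc.get(w, 0.0))
               / (qlen + mu + nu) for w in vocab}
```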
We see that the only difference between BayesInt and FixInt is that the interpolation coefficients are now adaptive to the query length. Indeed, when viewing BayesInt as FixInt, we see that α = |Q_k| / (|Q_k| + µ + ν) and β = ν / (ν + µ); thus, with fixed µ and ν, we will have a query-dependent α. Later we will show that such an adaptive α empirically performs better than a fixed α.

3.4 Online Bayesian Updating (OnlineUp)

Both FixInt and BayesInt summarize the history information by averaging the unigram language models estimated based on previous queries or clicked summaries. This means that all previous queries are treated equally, and so are all clicked summaries. However, as the user interacts with the system and acquires more knowledge about the information in the collection, presumably the reformulated queries will become better and better. Thus assigning decaying weights to the previous queries, so as to trust a recent query more than an earlier query, appears to be reasonable. Interestingly, if we incrementally update our belief about the user's information need after seeing each query, we could naturally obtain decaying weights on the previous queries. Since such an incremental online updating strategy can be used to exploit any evidence in an interactive retrieval system, we present it in a more general way. In a typical retrieval system, the system responds to every new query entered by the user by presenting a ranked list of documents. In order to rank documents, the system must have some model for the user's information need. In the KL-divergence retrieval model, this means that the system must compute a query model whenever a user enters a (new) query. A principled way of updating the query model is to use Bayesian estimation, which we discuss below.

3.4.1 Bayesian updating

We first discuss how we apply Bayesian estimation to update a query model in general. Let p(w|θ) be our current query model and T be a new piece of text evidence observed (e.g., T can be a query or a clicked summary). To update the query model based on T, we use θ to define a Dirichlet prior parameterized as

Dir(µ_T p(w_1|θ), ..., µ_T p(w_N|θ))

where µ_T is the equivalent sample size of the prior and w_1, ..., w_N are the words in the vocabulary. We use a Dirichlet prior because it is a conjugate prior for multinomial distributions. With such a conjugate prior, the predictive distribution (or equivalently, the mean of the posterior distribution of θ) is given by

p(w|θ') = (c(w, T) + µ_T p(w|θ)) / (|T| + µ_T)

where c(w, T) is the count of w in T and |T| is the length of T. The parameter µ_T indicates our confidence in the prior, expressed in terms of an equivalent text sample comparable with T. For example, µ_T = 1 indicates that the influence of the prior is equivalent to adding one extra word to T.

3.4.2 Sequential query model updating

We now discuss how we can update our query model over time during an interactive retrieval process using Bayesian estimation. In general, we assume that the retrieval system maintains a current query model at any moment. As soon as we obtain some implicit feedback evidence in the form of a piece of text T_i, we update the query model. Initially, before we see any user query, we may already have some information about the user. For example, we may have some information about what documents the user has viewed in the past. We use such information to define a prior on the query model, which is denoted by φ_0. After we observe the first query Q_1, we can update the query model based on the newly observed data Q_1. The updated query model θ_1 can then be used for ranking documents in response to Q_1.
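The conjugate update just described reduces to one line of arithmetic per word. The following minimal sketch (function name ours) also serves as the building block for the updating schemes below:

```python
from collections import Counter
from typing import Dict

def dirichlet_update(prior: Dict[str, float], text: str, mu_t: float) -> Dict[str, float]:
    """Posterior-mean update: p(w|theta') = (c(w,T) + mu_T * p(w|theta)) / (|T| + mu_T).

    `prior` maps words to probabilities p(w|theta); `mu_t` is the
    equivalent sample size of the prior.
    """
    counts = Counter(text.split())
    t_len = sum(counts.values())
    vocab = set(prior) | set(counts)
    return {w: (counts.get(w, 0) + mu_t * prior.get(w, 0.0)) / (t_len + mu_t)
            for w in vocab}
```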
As the user views some documents, the displayed summary text for such documents, C_1 (i.e., the clicked summaries), can serve as new data for us to further update the query model and obtain φ_1. As we obtain the second query Q_2 from the user, we can update φ_1 to obtain a new model θ_2. In general, we may repeat such an updating process to iteratively update the query model. Clearly, we see two types of updating: (1) updating based on a new query Q_i; (2) updating based on a new clicked summary C_i. In both cases, we can treat the current model as a prior of the context query model and treat the newly observed query or clicked summary as observed data. Thus we have the following updating equations:

p(w|θ_i) = (c(w, Q_i) + µ_i p(w|φ_{i-1})) / (|Q_i| + µ_i)
p(w|φ_i) = (c(w, C_i) + ν_i p(w|θ_i)) / (|C_i| + ν_i)

where µ_i is the equivalent sample size for the prior when updating the model based on a query, while ν_i is the equivalent sample size for the prior when updating the model based on a clicked summary. If we set µ_i = 0 (or ν_i = 0), we essentially ignore the prior model and would start a completely new query model based on the query Q_i (or the clicked summary C_i). On the other hand, if we set µ_i = +∞ (or ν_i = +∞), we essentially ignore the observed query (or the clicked summary) and do not update our model; the model remains the same as if we did not observe any new text evidence. In general, the parameters µ_i and ν_i may have different values for different i. For example, at the very beginning we may have a very sparse query history, so we could use a smaller µ_i; later, as the query history becomes richer, we can consider using a larger µ_i. In our experiments, however, unless otherwise stated, we set them to the same constants, i.e., ∀ i, j: µ_i = µ_j and ν_i = ν_j. Note that we can take either p(w|θ_i) or p(w|φ_i) as our context query model for ranking documents. This suggests that we do not have to wait until a user enters a new query to initiate a new round of retrieval; instead, as soon as we collect the clicked summary C_i, we can update the query model and use p(w|φ_i) to immediately rerank any documents that the user has not yet seen. To score documents after seeing query Q_k, we use p(w|θ_k) as given by the recursion above.

3.5 Batch Bayesian updating (BatchUp)

If we set the equivalent sample size parameters to fixed constants, the OnlineUp algorithm introduces a decaying factor--repeated interpolation causes the early data to have a low weight. This may be appropriate for the query history, as it is reasonable to believe that the user becomes better and better at query formulation as time goes on, but it is not necessarily appropriate for the clickthrough information, especially because we use the displayed summary rather than the actual content of a clicked document. One way to avoid applying a decaying interpolation to the clickthrough data is to do OnlineUp only for the query history Q = (Q_1, ..., Q_{i-1}), but not for the clickthrough data C. We first buffer all the clickthrough data together and use the whole chunk of clickthrough data to update the model generated by running OnlineUp on the previous queries. The updating equations are as follows:

p(w|θ_i) = (c(w, Q_i) + µ_i p(w|θ_{i-1})) / (|Q_i| + µ_i)
p(w|φ_k) = (Σ_{j=1}^{k-1} c(w, C_j) + ν_k p(w|θ_k)) / (Σ_{j=1}^{k-1} |C_j| + ν_k)

where µ_i has the same interpretation as in OnlineUp, but ν_k now indicates to what extent we want to trust the clicked summaries. As in OnlineUp, we set all µ_i's and ν_i's to the same value. To rank documents after seeing the current query Q_k, we use p(w|φ_k).
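Building on the `dirichlet_update` sketch above, OnlineUp and BatchUp can be written as short loops. Again, this is an illustrative sketch under the updating equations above, not the authors' implementation.

```python
def online_up(queries, clicked, prior, mu, nu):
    """OnlineUp: alternate query/clickthrough updates.

    queries: [Q_1, ..., Q_k]; clicked: [C_1, ..., C_{k-1}], where C_i is the
    concatenated clicked summaries of round i (may be "" if nothing was clicked).
    Returns p(w|theta_k), used to score documents for Q_k.
    """
    phi = prior    # phi_0
    theta = prior
    for i, q in enumerate(queries):
        theta = dirichlet_update(phi, q, mu)                # theta_i from phi_{i-1}
        if i < len(clicked):
            phi = dirichlet_update(theta, clicked[i], nu)   # phi_i from theta_i
    return theta

def batch_up(queries, clicked, prior, mu, nu):
    """BatchUp: OnlineUp over queries only, then one bulk clickthrough update."""
    theta = prior  # theta_0
    for q in queries:
        theta = dirichlet_update(theta, q, mu)              # theta_i from theta_{i-1}
    # Concatenating all clicked summaries yields the summed counts and lengths
    # in the BatchUp equation for phi_k.
    return dirichlet_update(theta, " ".join(clicked), nu)
```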
4. DATA COLLECTION

In order to quantitatively evaluate our models, we need a data set which includes not only a text database and testing topics, but also query history and clickthrough history for each topic. Since there is no such data set available to us, we have to create one. There are two choices. One is to extract topics and any associated query history and clickthrough history for each topic from the log of a retrieval system (e.g., a search engine). But the problem is that we have no relevance judgments on such data. The other choice is to use a TREC data set, which has a text database, topic descriptions and a relevance judgment file. Unfortunately, there are no query history and clickthrough history data. We decide to augment a TREC data set by collecting query history and clickthrough history data. We select the TREC AP88, AP89 and AP90 data as our text database, because the AP data has been used in several TREC tasks and has relatively complete judgments. There are altogether 242,918 news articles, and the average document length is 416 words. Most articles have titles; if not, we select the first sentence of the text as the title. For preprocessing, we only do case folding and do not do stopword removal or stemming. We select 30 relatively difficult topics from TREC topics 1-150. These 30 topics have the worst average precision performance among TREC topics 1-150 according to some baseline experiments using the KL-divergence model with Bayesian prior smoothing [20]. The reason why we select difficult topics is that the user would then have to have several interactions with the retrieval system in order to get satisfactory results, so we can expect to collect relatively richer query history and clickthrough history data from the user. In real applications, we may also expect our models to be most useful for such difficult topics, so our data collection strategy reflects real-world applications well. We index the TREC AP data set and set up a search engine and web interface for the TREC AP news articles. We use three subjects to do experiments to collect query history and clickthrough history data. Each subject is assigned 10 topics and given the topic descriptions provided by TREC. For each topic, the first query is the title of the topic given in the original TREC topic description. After the subject submits the query, the search engine does retrieval and returns a ranked list of search results to the subject. The subject browses the results and may click one or more results to browse the full text of the article(s). The subject may also modify the query to do another search. For each topic, the subject composes at least 4 queries. In our experiment, only the first 4 queries for each topic are used. The user needs to select the topic number from a selection menu before submitting the query to the search engine so that we can easily detect the session boundary, which is not the focus of our study. We use a relational database to store user interactions, including the submitted queries and clicked documents. For each query, we store the query terms and the associated result pages, and for each clicked document, we store the summary as shown on the search result page. The summary of the article is query-dependent and is computed online using fixed-length passage retrieval (the KL-divergence model with Bayesian prior smoothing). Among the 120 queries (4 for each of the 30 topics) which we study in the experiment, the average query length is 3.71 words. Altogether, 91 documents were clicked to view, so on average there are around 3 clicks per topic. The average length of a clicked summary is 34.4 words.
Among the 91 clicked documents, 29 documents are judged relevant according to the TREC judgment file. This data set is publicly available.

5. EXPERIMENTS

5.1 Experiment design

Our major hypothesis is that using search context (i.e., query history and clickthrough information) can help improve search accuracy. In particular, the search context can provide extra information to help us estimate a better query model than using just the current query. So most of our experiments involve comparing the retrieval performance using the current query only (thus ignoring any context) with that using the current query as well as the search context. Since we collected four versions of queries for each topic, we make such comparisons for each version of queries. We use two performance measures: (1) Mean Average Precision (MAP): this is the standard non-interpolated average precision and serves as a good measure of the overall ranking accuracy. (2) Precision at 20 documents (pr@20docs): this measure does not average well, but it is more meaningful than MAP and reflects the utility for users who only read the top 20 documents. In all cases, the reported figure is the average over all of the 30 topics. We evaluate the four models for exploiting search context (i.e., FixInt, BayesInt, OnlineUp, and BatchUp). Each model has precisely two parameters (α and β for FixInt; µ and ν for the others). Note that µ and ν may need to be interpreted differently for different methods. We vary these parameters and identify the optimal performance for each method. We also vary the parameters to study the sensitivity of our algorithms to the setting of the parameters.
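For reference, the two measures can be computed as in the following sketch (standard definitions, not the authors' evaluation code); MAP is then the mean of `average_precision` over the 30 topics.

```python
from typing import List, Set

def average_precision(ranked: List[str], relevant: Set[str]) -> float:
    """Non-interpolated average precision of one ranked list."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def precision_at(ranked: List[str], relevant: Set[str], k: int = 20) -> float:
    """Precision at the top k documents (pr@20docs when k = 20)."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k
```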
5.2 Result analysis

5.2.1 Overall effect of search context

We compare the optimal performances of the four models with those using the current query only in Table 1.

Table 1: Effect of using query history and clickthrough data for document ranking.

A row labeled q_i gives the baseline performance, and a row labeled q_i + H_Q + H_C gives the performance of using search context. We can make several observations from this table:

1. Comparing the baseline performances indicates that, on average, reformulated queries are better than the previous queries, with the performance of q4 being the best. Users generally formulate better and better queries.

2. Using search context generally has a positive effect, especially when the context is rich. This can be seen from the fact that the improvement for q4 and q3 is generally more substantial compared with q2. Actually, in many cases with q2, using the context may hurt the performance, probably because the history at that point is sparse. When the search context is rich, the performance improvement can be quite substantial. For example, BatchUp achieves 92.4% improvement in the mean average precision over q3 and 77.2% improvement over q4. (The generally low precisions also make the relative improvement deceptively high, though.)

3. Among the four models using search context, the performances of FixInt and OnlineUp are clearly worse than those of BayesInt and BatchUp. Since BayesInt performs better than FixInt, and the main difference between BayesInt and FixInt is that the former uses an adaptive coefficient for interpolation, the results suggest that using an adaptive coefficient is quite beneficial and a Bayesian style of interpolation makes sense. The main difference between OnlineUp and BatchUp is that OnlineUp uses decaying coefficients to combine the multiple clicked summaries, while BatchUp simply concatenates all clicked summaries. Therefore, the fact that BatchUp is consistently better than OnlineUp indicates that the weights for combining the clicked summaries indeed should not be decaying. While OnlineUp is theoretically appealing, its performance is inferior to BayesInt and BatchUp, likely because of the decaying coefficient. Overall, BatchUp appears to be the best method when we vary the parameter settings.

We have two different kinds of search context--query history and clickthrough data. We now look into the contribution of each kind of context.

5.2.2 Using query history only

In each of the four models, we can "turn off" the clickthrough history data by setting parameters appropriately. This allows us to evaluate the effect of using query history alone. We use the same parameter setting for query history as in Table 1. The results are shown in Table 2.

Table 2: Effect of using query history only for document ranking.

Here we see that, in general, the benefit of using query history is very limited, with mixed results. This is different from what is reported in a previous study [15], where using query history is consistently helpful. Another observation is that the context runs perform poorly at q2, but generally perform (slightly) better than the baselines for q3 and q4. This is again likely because at the beginning the initial query, which is the title in the original TREC topic description, may not be a good query; indeed, on average, the performances of these "first-generation" queries are clearly poorer than those of all other user-formulated queries in the later generations. Yet another observation is that, when using query history only, the BayesInt model appears to be better than the other models. Since the clickthrough data is ignored, OnlineUp and BatchUp are essentially the same algorithm; the displayed results thus reflect the variation caused by parameter µ. A smaller setting of 2.0 is seen to be better than a larger value of 5.0. A more complete picture of the influence of the setting of µ can be seen from Table 3, where we show the performance figures for a wider range of values of µ.

Table 3: Average precision of BatchUp using query history only.

The value of µ can be interpreted as how many words we regard the query history as being worth. A larger value thus puts more weight on the history and is seen to hurt the performance more when the history information is not rich. Thus, while for q4 the best performance tends to be achieved for µ ∈ [2, 5], only when µ = 0.5 do we see some small benefit for q2. As we would expect, an excessively large µ hurts the performance in general, but q2 is hurt most and q4 is barely hurt, indicating that as we accumulate more and more query history information, we can put more and more weight on the history information. This also suggests that a better strategy should probably dynamically adjust parameters according to how much history information we have. The mixed query history results suggest that the positive effect of using implicit feedback information may have largely come from the use of clickthrough history, which is indeed true, as we discuss in the next subsection.

5.2.3 Using clickthrough history only

We now turn off the query history and use only the clicked summaries plus the current query. The results are shown in Table 4.

Table 4: Effect of using clickthrough data only for document ranking.

We see that the benefit of using clickthrough information is much more significant than that of using query history. We see an overall positive effect, often with significant improvement over the baseline.
It is also clear that the richer the context data is, the more improvement using clicked summaries can achieve. Other than some occasional degradation of precision at 20 documents, the improvement is fairly consistent and often quite substantial. These results show that the clicked summary text is in general quite useful for inferring a user's information need. Intuitively, using the summary text, rather than the actual content of the document, makes more sense, as it is quite possible that the document behind a seemingly relevant summary is actually non-relevant. 29 out of the 91 clicked documents are relevant. Updating the query model based on such summaries would bring up the ranks of these relevant documents, causing performance improvement. However, such improvement is really not beneficial for the user, as the user has already seen these relevant documents. To see how much improvement we have achieved on the ranks of the unseen relevant documents, we exclude these 29 relevant documents from our judgment file and recompute the performance of BayesInt and the baseline using the new judgment file. The results are shown in Table 5.

Table 5: BayesInt evaluated on unseen relevant documents.

Note that the performance of the baseline method is lower due to the removal of the 29 relevant documents, which would have generally been ranked high in the results. From Table 5, we see clearly that using clicked summaries also helps improve the ranks of unseen relevant documents significantly. One remaining question is whether the clickthrough data is still helpful if none of the clicked documents is relevant. To answer this question, we took out the 29 relevant summaries from our clickthrough history data H_C to obtain a smaller set of clicked summaries H'_C, and re-evaluated the performance of the BayesInt method using H'_C with the same setting of parameters as in Table 4. The results are shown in Table 6.

Table 6: Effect of using only non-relevant clickthrough data.

We see that although the improvement is not as substantial as in Table 4, the average precision is improved across all generations of queries. These results should be interpreted as very encouraging, as they are based on only 62 non-relevant clickthroughs. In reality, a user would more likely click some relevant summaries, which would help bring up more relevant documents, as we have seen in Table 4 and Table 5.

5.2.4 Additive effect of context information

By comparing the results across Table 1, Table 2 and Table 4, we can see that the benefit of the query history information and that of the clickthrough information are mostly "additive", i.e., combining them can achieve better performance than using each alone, but most of the improvement has clearly come from the clickthrough information. In Table 7, we show this effect for the BatchUp method.

Table 7: Additive benefit of context information.

5.2.5 Parameter sensitivity

All four models have two parameters to control the relative weights of H_Q, H_C, and Q_k, though the parameterization is different from model to model. In this subsection, we study the parameter sensitivity for BatchUp, which appears to perform relatively better than the others. BatchUp has two parameters, µ and ν. We first look at µ. When µ is set to 0, the query history is not used at all, and we essentially just use the clickthrough data combined with the current query. If we increase µ, we gradually incorporate more information from the previous queries.
In Table 8, we show how the average precision of BatchUp changes as we vary µ with ν fixed to 15.0, where the best performance of BatchUp is achieved.

Table 8: Sensitivity of µ in BatchUp.

We see that the performance is mostly insensitive to the change of µ for q3 and q4, but decreases as µ increases for q2. The pattern is also similar when we set ν to other values. In addition to the fact that q1 is generally worse than q2, q3, and q4, another possible reason why the sensitivity is lower for q3 and q4 may be that we generally have more clickthrough data available for q3 and q4 than for q2, and the dominating influence of the clickthrough data has made the small differences caused by µ less visible for q3 and q4. The best performance is generally achieved when µ is around 2.0, which means that the past query information is as useful as about 2 words in the current query. Except for q2, there is clearly some tradeoff between the current query and the previous queries, and using a balanced combination of them achieves better performance than using each of them alone. We now turn to the other parameter, ν. When ν is set to 0, we only use the clickthrough data; when ν is set to +∞, we only use the query history and the current query. With µ set to 2.0, where the best performance of BatchUp is achieved, we vary ν and show the results in Table 9.

Table 9: Sensitivity of ν in BatchUp.

We see that the performance is also not very sensitive when ν < 30, with the best performance often achieved at ν = 15. This means that the combined information of the query history and the current query is as useful as about 15 words in the clickthrough data, indicating that the clickthrough information is highly valuable. Overall, these sensitivity results show that BatchUp not only performs better than the other methods, but is also quite robust.

6. CONCLUSIONS AND FUTURE WORK

In this paper, we have explored how to exploit implicit feedback information, including query history and clickthrough history within the same search session, to improve information retrieval performance. Using the KL-divergence retrieval model as the basis, we proposed and studied four statistical language models for context-sensitive information retrieval, i.e., FixInt, BayesInt, OnlineUp and BatchUp. We use the TREC AP data to create a test set for evaluating implicit feedback models. Experiment results show that using implicit feedback, especially clickthrough history, can substantially improve retrieval performance without requiring any additional user effort. The current work can be extended in several ways. First, we have only explored some very simple language models for incorporating implicit feedback information. It would be interesting to develop more sophisticated models to better exploit query history and clickthrough history. For example, we may treat a clicked summary differently depending on whether the current query is a generalization or refinement of the previous query. Second, the proposed models can be implemented in any practical system. We are currently developing a client-side personalized search agent, which will incorporate some of the proposed algorithms. We will also do a user study to evaluate the effectiveness of these models in real web search. Finally, we should further study a general retrieval framework for sequential decision making in interactive information retrieval, and study how to optimize some of the parameters in the context-sensitive retrieval models.
H-84
Event Threading within News Topics
With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word-based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on manually labeled data sets show that our models effectively identify the events and capture dependencies among them.
[ "event thread", "event", "thread", "automat techniqu", "flat hierarchi", "depend", "event model", "novel featur", "tempor local", "event recognit", "time-order", "new organ", "topic detect", "topic cluster", "inter-relat event", "semin event", "quick overview", "hidden markov model", "flatclust", "atom", "microscop event", "map function", "direct edg", "time order", "agglom cluster", "cosin similar", "term vector", "simpl threshold", "maximum span tree", "correct granular", "depend precis", "depend recal", "depend f-measur", "temporalloc", "timedecai", "cluster" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "M", "M", "M", "M", "U", "M", "U", "U", "M", "U", "U", "U", "U", "U", "U", "U", "U", "U", "M", "M", "M", "U", "U", "U" ]
Event Threading within News Topics

Ramesh Nallapati, Ao Feng, Fuchun Peng, James Allan
Center for Intelligent Information Retrieval, Department of Computer Science, University of Massachusetts, Amherst, MA 01003
{nmramesh, aofeng, fuchun, allan}@cs.umass.edu

ABSTRACT

With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word-based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on manually labeled data sets show that our models effectively identify the events and capture dependencies among them.

Categories and Subject Descriptors
H.3.3 [Information Search and Retrieval]: Clustering

General Terms
Algorithms, Experimentation, Measurement

1. INTRODUCTION

News forms a major portion of the information disseminated in the world every day. Common people and news analysts alike are very interested in keeping abreast of new things that happen in the news, but it is becoming very difficult to cope with the huge volumes of information that arrive each day. Hence there is an increasing need for automatic techniques to organize news stories in a way that helps users interpret and analyze them quickly. This problem is addressed by a research program called Topic Detection and Tracking (TDT) [3] that runs an open annual competition on standardized tasks of news organization. One of the shortcomings of current TDT evaluation is its view of news topics as flat collections of stories. For example, the detection task of TDT is to arrange a collection of news stories into clusters of topics. However, a topic in news is more than a mere collection of stories: it is characterized by a definite structure of inter-related events. This is indeed recognized by TDT, which defines a topic as "a set of news stories that are strongly related by some seminal real-world event", where an event is defined as "something that happens at a specific time and location" [3]. For example, when a bomb explodes in a building, that is the seminal event that triggers the topic. Other events in the topic may include the rescue attempts, the search for perpetrators, arrests and trials, and so on. We see that there is a pattern of dependencies between pairs of events in the topic. In the above example, the event of rescue attempts is "influenced" by the event of bombing, and so is the event of the search for perpetrators. In this work we investigate methods for modeling the structure of a topic in terms of its events. By structure, we mean not only identifying the events that make up a topic, but also establishing dependencies--generally causal--among them.
We call the process of recognizing events and identifying dependencies among them event threading, an analogy to email threading that shows connections between related email messages. We refer to the resulting interconnected structure of events as the event model of the topic. Although this paper focuses on threading events within an existing news topic, we expect that such an event-based dependency structure more accurately reflects the structure of news than strictly bounded topics do. From a user's perspective, we believe that our view of a news topic as a set of interconnected events helps him/her get a quick overview of the topic and also allows him/her to navigate through the topic faster. The rest of the paper is organized as follows. In section 2, we discuss related work. In section 3, we define the problem and use an example to illustrate threading of events within a news topic. In section 4, we describe how we built the corpus for our problem. Section 5 presents our evaluation techniques, while section 6 describes the techniques we use for modeling event structure. In section 7 we present our experiments and results. Section 8 concludes the paper with a few observations on our results and comments on future work.

2. RELATED WORK

The process of threading events together is related to threading of electronic mail only by name for the most part. Email usually incorporates a strong structure of referenced messages and consistently formatted subject headings--though information retrieval techniques are useful when the structure breaks down [7]. Email threading captures reference dependencies between messages and does not attempt to reflect any underlying real-world structure of the matter under discussion. Another area of research that looks at the structure within a topic is hierarchical text classification of topics [9, 6]. The hierarchy within a topic does impose a structure on the topic, but we do not know of an effort to explore the extent to which that structure reflects the underlying event relationships. Barzilay and Lee [5] proposed a content structure modeling technique where topics within text are learned using unsupervised methods, and a linear order of these topics is modeled using hidden Markov models. Our work differs from theirs in that we do not constrain the dependency to be linear. Also, their algorithms are tuned to work on specific genres of topics such as earthquakes, accidents, etc., while we expect our algorithms to generalize over any topic. In TDT, researchers have traditionally considered topics as flat clusters [1]. However, in TDT-2003, a hierarchical structure of topic detection was proposed, and [2] made useful attempts to adopt the new structure. However, this structure still did not explicitly model any dependencies between events. In the work closest to ours, Makkonen [8] suggested modeling news topics in terms of their evolving events. However, that paper stopped short of proposing any models for the problem. Other related work that dealt with analysis within a news topic includes temporal summarization of news topics [4].

3. PROBLEM DEFINITION AND NOTATION

In this work, we have adhered to the definitions of event and topic as used in TDT. We present some definitions (in italics) and our interpretations (regular-faced) below for clarity.

1. Story: A story is a news article delivering some information to users. In TDT, a story is assumed to refer to only a single topic. In this work, we also assume that each story discusses a single event.
In other words, a story is the smallest atomic unit in the hierarchy (topic → event → story). Clearly, both assumptions are not necessarily true in reality, but we accept them for simplicity in modeling.

2. Event: An event is something that happens at some specific time and place [10]. In our work, we represent an event by the set of stories that discuss it. Following the assumption of atomicity of a story, this means that any set of distinct events can be represented by a set of non-overlapping clusters of news stories.

3. Topic: A set of news stories strongly connected by a seminal event. We expand on this definition and interpret a topic as a series of related events. Thus a topic can be represented by clusters of stories, each representing an event, and a set of (directed or undirected) edges between pairs of these clusters representing the dependencies between these events. We will describe this representation of a topic in more detail in the next section.

4. Topic detection and tracking (TDT): Topic detection detects clusters of stories that discuss the same topic; topic tracking detects stories that discuss a previously known topic [3]. Thus TDT concerns itself mainly with clustering stories into topics that discuss them.

5. Event threading: Event threading detects events within a topic, and also captures the dependencies among the events. Thus the main difference between event threading and TDT is that we focus our modeling effort on microscopic events rather than larger topics. Additionally, event threading models the relatedness or dependencies between pairs of events in a topic, while TDT models topics as unrelated clusters of stories.

We first define our problem and the representation of our model formally, and then illustrate with the help of an example. We are given a set of n news stories S = {s_1, ..., s_n} on a given topic T and their times of publication. We define a set of events E = {e_1, ..., e_m} with the following constraints:

e_i ∈ 2^S  (1)
e_i ∩ e_j = ∅, ∀ i ≠ j  (2)
∪_{i=1}^{m} e_i = S  (3)

While the first constraint says that each event is an element in the power set of S, the second constraint ensures that each story can belong to at most one event. The last constraint tells us that every story belongs to one of the events in E. In fact, this allows us to define a mapping function f from stories to events as follows:

f(s) = e  iff  s ∈ e  (4)

Further, we also define a set of directed edges D = {(e_i, e_j)} which denote dependencies between events. It is important to explain what we mean by this directional dependency: while the existence of an edge itself represents the relatedness of two events, the direction could imply causality or temporal ordering. By causal dependency we mean that the occurrence of event B is related to and is a consequence of the occurrence of event A. By temporal ordering, we mean that event B happened after event A and is related to A, but is not necessarily a consequence of A. For example, consider the following two events: "plane crash" (event A) and "subsequent investigations" (event B) in a topic on a plane crash incident. Clearly, the investigations are a result of the crash. Hence an arrow from A to B falls under the category of causal dependency. Now consider the pair of events "Pope arrives in Cuba" (event A) and "Pope meets Castro" (event B) in a topic that discusses the Pope's visit to Cuba. Now events A and B are closely related through their association with the Pope and Cuba, but event B is not necessarily a consequence of the occurrence of event A.
An arrow in such a scenario captures what we call time ordering. In this work, we do not make an attempt to distinguish between these two kinds of dependencies, and our model treats them as identical. A simpler (and hence less controversial) choice would be to ignore direction in the dependencies altogether and consider only undirected edges. This choice definitely makes sense as a first step, but we chose the former since we believe directional edges make more sense to the user, as they provide a more illustrative flow-chart perspective of the topic. To make the idea of event threading more concrete, consider the example of TDT3 topic 30005, titled "Osama bin Laden's Indictment" (in the 1998 news). This topic has 23 stories which form 5 events. An event model of this topic can be represented as in figure 1. Each box in the figure indicates an event in the topic of Osama's indictment. The occurrence of event 2, namely "Trial and Indictment of Osama", is dependent on the event of "evidence gathered by CIA", i.e., event 1. Similarly, event 2 influences the occurrences of events 3, 4 and 5, namely "Threats from Militants", "Reactions from Muslim World" and "announcement of reward". Thus all the dependencies in the example are causal. Extending our notation further, we call an event A a parent of B, and B the child of A, if (A, B) ∈ D. We define an event model M = (E, D) to be a tuple of the set of events and the set of dependencies.

Figure 1: An event model of TDT topic "Osama bin Laden's indictment". The five events are (1) Evidence gathered by CIA, (2) Trial and Indictment of Osama, (3) Threats from Islamic militants, (4) Reactions from Muslim world, and (5) CIA announces reward; event 1 points to event 2, which points to events 3, 4 and 5.

Event threading is strongly related to topic detection and tracking, but also differs from it significantly. It goes beyond topics and models the relationships between events. Thus, event threading can be considered a further extension of topic detection and tracking, and is more challenging due to at least the following difficulties:

1. The number of events is unknown.
2. The granularity of events is hard to define.
3. The dependencies among events are hard to model.
4. Since it is a brand new research area, no standard evaluation metrics or benchmark data are available.

In the next few sections, we will describe our attempts to tackle these problems.

4. LABELED DATA

We picked 28 topics from the TDT2 corpus and 25 topics from the TDT3 corpus. The criterion we used for selecting a topic is that it should contain at least 15 on-topic stories from CNN headline news. If a topic contained more than 30 CNN stories, we picked only the first 30 stories to keep the topic short enough for annotators. The reason for choosing only CNN as the source is that the stories from this source tend to be short and precise and do not tend to digress or drift too far away from the central theme. We believe modeling such stories would be a useful first step before dealing with more complex data sets. We hired an annotator to create truth data. Annotation includes defining the event membership for each story and also the dependencies. We supervised the annotator on a set of three topics that we did our own annotations on, and then asked her to annotate the 28 topics from TDT2 and 25 topics from TDT3. In identifying events in a topic, the annotator was asked to broadly follow the TDT definition of an event, i.e., "something that happens at a specific time and location".
4. LABELED DATA
We picked 28 topics from the TDT2 corpus and 25 topics from the TDT3 corpus. The criterion we used for selecting a topic is that it should contain at least 15 on-topic stories from CNN headline news. If a topic contained more than 30 CNN stories, we picked only the first 30 stories to keep the topic short enough for annotators. The reason for choosing only CNN as the source is that stories from this source tend to be short and precise and do not tend to digress or drift too far away from the central theme. We believe modeling such stories is a useful first step before dealing with more complex data sets.
We hired an annotator to create truth data. Annotation includes defining the event membership for each story and also the dependencies. We supervised the annotator on a set of three topics that we annotated ourselves and then asked her to annotate the 28 topics from TDT2 and 25 topics from TDT3. In identifying events in a topic, the annotator was asked to broadly follow the TDT definition of an event, i.e., 'something that happens at a specific time and location'. The annotator was encouraged to merge two events A and B into a single event C if any of the stories discusses both A and B; this satisfies our assumption that each story corresponds to a unique event. The annotator was also encouraged to avoid singleton events, events that contain a single news story, if possible. We realized from our own experience that people differ in their perception of an event, especially when the number of stories in that event is small. As part of the guidelines, we instructed the annotator to assign titles to all the events in each topic; we believe this helps make her understanding of the events more concrete. We do not, however, use or model these titles in our algorithms.
In defining dependencies between events, we imposed no restrictions on the graph structure: each event could have a single parent, multiple parents, or none, and the graph could have cycles or orphan nodes. The annotator was, however, instructed to assign a dependency from event A to event B if and only if the occurrence of B is either causally influenced by A or is closely related to A and follows A in time.
From the annotated topics, we created a training set of 26 topics and a test set of 27 topics by merging the 28 topics from TDT2 and 25 from TDT3 and splitting them randomly. Table 1 shows that the training and test sets have fairly similar statistics.

Feature                        Training set   Test set
Num. topics                    26             27
Avg. num. stories/topic        28.69          26.74
Avg. doc. length               64.60          64.04
Avg. num. stories/event        5.65           6.22
Avg. num. events/topic         5.07           4.29
Avg. num. dependencies/topic   3.07           2.92
Avg. num. dependencies/event   0.61           0.68
Avg. num. days/topic           30.65          34.48

Table 1: Statistics of annotated data

5. EVALUATION
A system can generate some event model $M' = (E', \mathcal{E}')$ using certain algorithms, which is usually different from the truth model $M = (E, \mathcal{E})$ (we assume the annotator did not make any mistakes). Comparing a system event model $M'$ with the true model $M$ requires comparing the entire event models, including their dependency structure, and different event granularities may bring a huge discrepancy between $M'$ and $M$. This is certainly non-trivial, as even testing whether two graphs are isomorphic has no known polynomial-time solution. Hence, instead of comparing the actual structure, we examine one pair of stories at a time and verify whether the system and true labels agree on their event memberships and dependencies. Specifically, we compare two kinds of story pairs:
- Cluster pairs $C(M)$: the complete set of unordered pairs $(s_i, s_j)$ of stories $s_i$ and $s_j$ that fall within the same event given a model $M$. Formally,
$$C(M) = \{(s_i, s_j) \mid s_i, s_j \in S, \; s_i \neq s_j, \; f(s_i) = f(s_j)\} \quad (5)$$
where $f$ is the function in $M$ that maps stories to events as defined in equation 4.
- Dependency pairs $D(M)$: the set of all ordered pairs of stories $(s_i, s_j)$ such that there is a dependency from the event of $s_i$ to the event of $s_j$ in the model $M$:
$$D(M) = \{(s_i, s_j) \mid (f(s_i), f(s_j)) \in \mathcal{E}\} \quad (6)$$
Note that the story pair is ordered here, so $(s_i, s_j)$ is not equivalent to $(s_j, s_i)$: in our evaluation, a correct pair with the wrong direction is considered a mistake. As we mentioned earlier in section 3, ignoring the direction might make the problem simpler, but we would lose the expressiveness of our representation.

Figure 2: Evaluation measures. [Worked example: true events {A,B}, {C}, {D,E} vs. system events {A,C}, {B}, {D,E}, yielding cluster precision 1/2, cluster recall 1/2, dependency precision 2/4, dependency recall 2/6.]
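Both pair sets of equations (5) and (6) can be enumerated directly from an event model. The sketch below is our own rendering on top of the hypothetical EventModel structure from section 3, not the authors' code.

```python
from itertools import combinations

def cluster_pairs(model):
    """C(M), equation (5): unordered pairs of stories in the same event."""
    pairs = set()
    for event in model.events:
        for s_i, s_j in combinations(sorted(event, key=lambda s: s.story_id), 2):
            pairs.add(frozenset((s_i, s_j)))   # frozenset: order does not matter
    return pairs

def dependency_pairs(model):
    """D(M), equation (6): ordered story pairs whose events are linked."""
    pairs = set()
    for (u, v) in model.edges:                 # edge from event u to event v
        for s_i in model.events[u]:
            for s_j in model.events[v]:
                pairs.add((s_i, s_j))          # tuple: direction matters
    return pairs
```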
Given these two sets of story pairs corresponding to the true event model $M$ and the system event model $M'$, we define recall and precision for each category as follows.
- Cluster precision (CP): the probability that two randomly selected stories $s_i$ and $s_j$ are in the same true event given that they are in the same system event:
$$CP = P(f(s_i) = f(s_j) \mid f'(s_i) = f'(s_j)) = \frac{|C(M) \cap C(M')|}{|C(M')|} \quad (7)$$
where $f'$ is the story-event mapping function corresponding to the model $M'$.
- Cluster recall (CR): the probability that two randomly selected stories $s_i$ and $s_j$ are in the same system event given that they are in the same true event:
$$CR = P(f'(s_i) = f'(s_j) \mid f(s_i) = f(s_j)) = \frac{|C(M) \cap C(M')|}{|C(M)|} \quad (8)$$
- Dependency precision (DP): the probability that there is a dependency between the events of two randomly selected stories $s_i$ and $s_j$ in the true model $M$ given that they have a dependency in the system model $M'$. Note that the direction of the dependency matters in the comparison:
$$DP = P((f(s_i), f(s_j)) \in \mathcal{E} \mid (f'(s_i), f'(s_j)) \in \mathcal{E}') = \frac{|D(M) \cap D(M')|}{|D(M')|} \quad (9)$$
- Dependency recall (DR): the probability that there is a dependency between the events of two randomly selected stories $s_i$ and $s_j$ in the system model $M'$ given that they have a dependency in the true model $M$. Again, the direction of the dependency is taken into consideration:
$$DR = P((f'(s_i), f'(s_j)) \in \mathcal{E}' \mid (f(s_i), f(s_j)) \in \mathcal{E}) = \frac{|D(M) \cap D(M')|}{|D(M)|} \quad (10)$$
The measures are illustrated by the example in figure 2. We also combine these measures using the well-known F1-measure commonly used in text classification and other research areas:
$$CF = \frac{2 \cdot CP \cdot CR}{CP + CR}, \qquad DF = \frac{2 \cdot DP \cdot DR}{DP + DR}, \qquad JF = \frac{2 \cdot CF \cdot DF}{CF + DF} \quad (11)$$
where $CF$ and $DF$ are the cluster and dependency F1-measures respectively, and $JF$ is the joint F1-measure that we use to measure overall performance.
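Given the pair sets, all eight measures reduce to set intersections. A minimal sketch follows, again our own, reusing the hypothetical cluster_pairs and dependency_pairs helpers above.

```python
def f1(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def evaluate(truth, system):
    """Equations (7)-(11): CP, CR, DP, DR and the combined CF, DF, JF."""
    C_t, C_s = cluster_pairs(truth), cluster_pairs(system)
    D_t, D_s = dependency_pairs(truth), dependency_pairs(system)
    cp = len(C_t & C_s) / len(C_s) if C_s else 0.0   # cluster precision
    cr = len(C_t & C_s) / len(C_t) if C_t else 0.0   # cluster recall
    dp = len(D_t & D_s) / len(D_s) if D_s else 0.0   # dependency precision
    dr = len(D_t & D_s) / len(D_t) if D_t else 0.0   # dependency recall
    cf, df = f1(cp, cr), f1(dp, dr)
    return {"CP": cp, "CR": cr, "CF": cf,
            "DP": dp, "DR": dr, "DF": df, "JF": f1(cf, df)}
```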
6. TECHNIQUES
The task of event modeling can be split into two parts: clustering the stories into the unique events of the topic and constructing dependencies among them. In the following subsections, we describe the techniques we developed for each of these sub-tasks.
6.1 Clustering
Each topic is composed of multiple events, so stories must be clustered into events before we can model the dependencies among them. For simplicity, all stories in the same topic are assumed to be available at one time, rather than arriving as a text stream. This task is similar to traditional clustering, but features other than word distributions may also be critical in our application. In many text clustering systems, the similarity between two stories is the inner product of their tf-idf vectors, hence we use it as one of our features. Stories in the same event tend to follow temporal locality, so the time stamp of each story can be a useful feature. Additionally, named entities such as person and location names are another obvious feature when forming events, since stories in the same event tend to be related to the same person(s) and location(s). In this subsection, we present an agglomerative clustering algorithm that combines all these features. In our experiments, however, we study the effect of each feature on performance separately, using modified versions of this algorithm.
6.1.1 Agglomerative clustering with time decay (ACDT)
We initialize our events to singleton events (clusters), i.e., each cluster contains exactly one story, so the similarity between two events, to start with, is exactly the similarity between the corresponding stories. The similarity $wsum(s_1, s_2)$ between two stories $s_1$ and $s_2$ is given by:
$$wsum(s_1, s_2) = w_1 \cdot cos(s_1, s_2) + w_2 \cdot Loc(s_1, s_2) + w_3 \cdot Per(s_1, s_2) \quad (12)$$
Here $w_1$, $w_2$, $w_3$ are the weights on the different features. In this work, we determined them empirically, but in the future one could consider more sophisticated learning techniques to determine them. $cos(s_1, s_2)$ is the cosine similarity of the term vectors. $Loc(s_1, s_2)$ is 1 if some location appears in both stories and 0 otherwise; $Per(s_1, s_2)$ is defined likewise for person names.
We use time decay when calculating the similarity of story pairs, i.e., the larger the time difference between two stories, the smaller their similarity. The time period of each topic varies a lot, from a few days to a few months, so we normalize the time difference by the whole duration of the topic. The time-decay-adjusted similarity $sim(s_1, s_2)$ is given by:
$$sim(s_1, s_2) = wsum(s_1, s_2) \cdot e^{-\alpha \frac{|t_1 - t_2|}{T}} \quad (13)$$
where $t_1$ and $t_2$ are the time stamps of stories 1 and 2 respectively, $T$ is the time difference between the earliest and the latest story in the given topic, and $\alpha$ is the time decay factor.
In each iteration, we find the most similar event pair and merge them. We have three different ways to compute the similarity between two events $E_u$ and $E_v$:
- Average link: the similarity is the average of the similarities of all pairs of stories between $E_u$ and $E_v$:
$$sim(E_u, E_v) = \frac{\sum_{s_u \in E_u} \sum_{s_v \in E_v} sim(s_u, s_v)}{|E_u| |E_v|} \quad (14)$$
- Complete link: the similarity between two events is the smallest of the pairwise similarities:
$$sim(E_u, E_v) = \min_{s_u \in E_u, \, s_v \in E_v} sim(s_u, s_v) \quad (15)$$
- Single link: the similarity is the best similarity over all pairs of stories:
$$sim(E_u, E_v) = \max_{s_u \in E_u, \, s_v \in E_v} sim(s_u, s_v) \quad (16)$$
This process continues until the maximum similarity falls below a threshold or the number of clusters is smaller than a given number.
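A compact sketch of the ACDT loop under equations (12)-(14) follows. It is our own reading of the algorithm, not the authors' implementation; the cosine, location, and person feature functions are passed in as assumptions rather than implemented.

```python
import math

def story_sim(s1, s2, w, alpha, T, cos, loc, per):
    """Equations (12)-(13): weighted feature sum with exponential time decay."""
    wsum = w[0] * cos(s1, s2) + w[1] * loc(s1, s2) + w[2] * per(s1, s2)
    return wsum * math.exp(-alpha * abs(s1.pub_time - s2.pub_time) / T)

def avg_link(e_u, e_v, sim):
    """Equation (14): average pairwise similarity between two events."""
    return sum(sim(a, b) for a in e_u for b in e_v) / (len(e_u) * len(e_v))

def acdt(stories, sim, threshold, min_clusters=1):
    """Start from singleton events; greedily merge the most similar pair
    until similarity drops below the threshold or few clusters remain."""
    events = [{s} for s in stories]
    while len(events) > min_clusters:
        best, u, v = max(
            ((avg_link(events[i], events[j], sim), i, j)
             for i in range(len(events)) for j in range(i + 1, len(events))),
            key=lambda t: t[0])
        if best < threshold:
            break
        events[u] |= events.pop(v)   # v > u, so index u is unaffected
    return events
```

A concrete `sim` would close over the tuned weights, e.g. `sim = lambda a, b: story_sim(a, b, (1, 0, 0), 1.0, duration, cos, loc, per)` for the cosine-plus-time-decay configuration.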
6.2 Dependency modeling
Capturing dependencies is an extremely hard problem because it may require a 'deeper understanding' of the events in question. A human annotator decides on dependencies based not just on the information in the events but also on his/her vast repertoire of domain knowledge and general understanding of how things operate in the world. For example, in figure 1 a human knows 'Trial and indictment of Osama' is influenced by 'Evidence gathered by CIA' because he/she understands the process of law in general. We believe a robust model should incorporate such domain knowledge in capturing dependencies, but in this work, as a first step, we rely on surface features such as the time ordering of news stories and word distributions to model them. Our experiments in later sections demonstrate that such features are indeed useful in capturing dependencies to a large extent.
In this subsection, we describe the models we considered for capturing dependencies. In the rest of this discussion, we assume that we are already given the mapping $f': S \to E'$ and focus only on modeling the edges $\mathcal{E}'$. First we define a couple of features that the following models employ. We define a 1-1 time-ordering function $t: S \to \{1, \dots, n\}$ that sorts stories in ascending order by their time of publication. The event-time-ordering function $t_e$ is then defined as follows:
$$t_e: \{E_1, \dots, E_m\} \to \{1, \dots, m\} \;\text{ s.t. }\; t_e(E_u) < t_e(E_v) \iff \min_{s_u \in E_u} t(s_u) < \min_{s_v \in E_v} t(s_v) \quad (17)$$
In other words, $t_e$ time-orders events based on the time ordering of their respective first stories. We also use the average cosine similarity between two events as a feature, defined as follows:
$$AvgSim(E_u, E_v) = \frac{\sum_{s_u \in E_u} \sum_{s_v \in E_v} cos(s_u, s_v)}{|E_u| |E_v|} \quad (18)$$
6.2.1 Complete-link model
In this model, we assume that there are dependencies between all pairs of events. The direction of a dependency is determined by the time ordering of the first stories in the respective events. Formally, the system edges are defined as:
$$\mathcal{E}' = \{(E_u, E_v) \mid t_e(E_u) < t_e(E_v)\} \quad (19)$$
where $t_e$ is the event-time-ordering function. In other words, the dependency edge is directed from event $E_u$ to event $E_v$ if the first story in event $E_u$ is earlier than the first story in event $E_v$. We point out that this is not to be confused with the complete-link algorithm in clustering; although we use the same name, it will be clear from the context which one we refer to.
6.2.2 Simple thresholding
This model extends the complete-link model with the additional constraint that there is a dependency between two events $E_u$ and $E_v$ only if their average cosine similarity is greater than a threshold $T$. Formally,
$$\mathcal{E}' = \{(E_u, E_v) \mid AvgSim(E_u, E_v) > T \;\wedge\; t_e(E_u) < t_e(E_v)\} \quad (20)$$
6.2.3 Nearest-parent model
In this model, we assume that each event can have at most one parent. We define the set of dependencies as follows:
$$\mathcal{E}' = \{(E_u, E_v) \mid AvgSim(E_u, E_v) > T \;\wedge\; t_e(E_v) = t_e(E_u) + 1\} \quad (21)$$
Thus, for each event $E_v$, the nearest-parent model considers only the event immediately preceding it, as defined by $t_e$, as a potential candidate. The candidate is assigned as the parent only if the average similarity exceeds a predefined threshold $T$.
6.2.4 Best-similarity model
This model also assumes that each event can have at most one parent. An event $E_v$ is assigned a parent $E_u$ if and only if $E_u$ is the most similar earlier event to $E_v$ and the similarity exceeds a threshold $T$. Mathematically,
$$\mathcal{E}' = \{(E_u, E_v) \mid AvgSim(E_u, E_v) > T \;\wedge\; E_u = \arg\max_{E_w : \, t_e(E_w) < t_e(E_v)} AvgSim(E_w, E_v)\} \quad (22)$$
6.2.5 Maximum-spanning-tree model
In this model, we first build a maximum spanning tree (MST) using a greedy algorithm on the fully connected, weighted, undirected graph whose vertices are the events and whose edges $\hat{\mathcal{E}}$ are defined as follows:
$$\hat{\mathcal{E}} = \{(E_u, E_v)\}, \qquad w(E_u, E_v) = AvgSim(E_u, E_v) \quad (23)$$
Let $MST(\hat{\mathcal{E}})$ be the set of edges in the maximum spanning tree. Our directed dependency edges are then defined as:
$$\mathcal{E}' = \{(E_u, E_v) \mid (E_u, E_v) \in MST(\hat{\mathcal{E}}) \;\wedge\; t_e(E_u) < t_e(E_v) \;\wedge\; AvgSim(E_u, E_v) > T\} \quad (24)$$
Thus in this model, we assign dependencies between the most similar events in the topic.
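To make the model definitions concrete, the sketch below renders two of them, nearest parent (equation 21) and best similarity (equation 22), in code. It is our own reconstruction, not the authors' implementation, and it assumes a story-level cosine function `cos` is supplied.

```python
def avg_cos_sim(e_u, e_v, cos):
    """Equation (18): average cosine similarity between two events."""
    return sum(cos(a, b) for a in e_u for b in e_v) / (len(e_u) * len(e_v))

def time_order(events):
    """Equation (17): event indices sorted by earliest publication time."""
    return sorted(range(len(events)),
                  key=lambda i: min(s.pub_time for s in events[i]))

def nearest_parent(events, cos, T):
    """Equation (21): link each event to its immediate predecessor,
    but only if their average similarity exceeds T."""
    order = time_order(events)
    return {(order[k], order[k + 1])
            for k in range(len(order) - 1)
            if avg_cos_sim(events[order[k]], events[order[k + 1]], cos) > T}

def best_similarity(events, cos, T):
    """Equation (22): link each event to its most similar earlier event,
    provided the similarity exceeds T."""
    order, edges = time_order(events), set()
    for k in range(1, len(order)):
        v = order[k]
        u = max(order[:k], key=lambda w: avg_cos_sim(events[w], events[v], cos))
        if avg_cos_sim(events[u], events[v], cos) > T:
            edges.add((u, v))
    return edges
```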
7. EXPERIMENTS
Our experiments consist of three parts. First we modeled only the event clustering part (defining the mapping function $f'$) using the clustering algorithms described in section 6.1. Then we modeled only the dependencies by providing the system with the true clusters and running only the dependency algorithms of section 6.2. Finally, we experimented with combinations of clustering and dependency algorithms to produce the complete event model. This way of experimentation allows us to compare the performance of our algorithms both in isolation and in association with other components. The following subsections present the three parts of our experimentation.
7.1 Clustering
We tried several variations of the ACDT algorithm to study the effects of various features on clustering performance. All parameters are learned by tuning on the training set. We also tested the algorithms on the test set with parameters fixed at the optimal values learned from training. We used agglomerative clustering based on cosine similarity alone as our clustering baseline. The results on the training and test sets are in Tables 2 and 3 respectively. We use the cluster F1-measure (CF) averaged over all topics as our evaluation criterion.

Model                    best T   CP     CR     CF     P-value
cos+1-lnk                0.15     0.41   0.56   0.43   -
cos+all-lnk              0.00     0.40   0.62   0.45   -
cos+Loc+avg-lnk          0.07     0.37   0.74   0.45   -
cos+Per+avg-lnk          0.07     0.39   0.70   0.46   -
cos+TD+avg-lnk           0.04     0.45   0.70   0.53   2.9e-4*
cos+N(T)+avg-lnk         -        0.41   0.62   0.48   7.5e-2
cos+N(T)+T+avg-lnk       0.03     0.42   0.62   0.49   2.4e-2*
cos+TD+N(T)+avg-lnk      -        0.44   0.66   0.52   7.0e-3*
cos+TD+N(T)+T+avg-lnk    0.03     0.47   0.64   0.53   1.1e-3*
Baseline (cos+avg-lnk)   0.05     0.39   0.67   0.46   -

Table 2: Comparison of agglomerative clustering algorithms (training set)

Model                    CP     CR     CF     P-value
cos+1-lnk                0.43   0.49   0.39   -
cos+all-lnk              0.43   0.62   0.47   -
cos+Loc+avg-lnk          0.37   0.73   0.45   -
cos+Per+avg-lnk          0.44   0.62   0.45   -
cos+TD+avg-lnk           0.48   0.70   0.54   0.014*
cos+N(T)+avg-lnk         0.41   0.71   0.51   0.31
cos+N(T)+T+avg-lnk       0.43   0.69   0.52   0.14
cos+TD+N(T)+avg-lnk      0.43   0.76   0.54   0.025*
cos+TD+N(T)+T+avg-lnk    0.47   0.69   0.54   0.0095*
Baseline (cos+avg-lnk)   0.44   0.67   0.50   -

Table 3: Comparison of agglomerative clustering algorithms (test set)

A p-value marked with a * indicates a statistically significant improvement over the baseline (95% confidence level, one-tailed T-test). The methods shown in Tables 2 and 3 are:
- Baseline: tf-idf vector weights, cosine similarity, average link in clustering; $w_1 = 1$, $w_2 = w_3 = 0$ in equation 12 and $\alpha = 0$ in equation 13. The F-value is the maximum obtained by tuning the threshold.
- cos+1-lnk: single-link comparison (equation 16), where the similarity of two clusters is the maximum over all story pairs; other settings as in the baseline.
- cos+all-lnk: the complete-link algorithm of equation 15; similar to single link, but it takes the minimum similarity over all story pairs.
- cos+Loc+avg-lnk: location names are used when calculating similarity, with $w_2 = 0.05$ in equation 12. This and all following methods use average link (equation 14), since single link and complete link showed no performance improvement.
- cos+Per+avg-lnk: $w_3 = 0.05$ in equation 12, i.e., some weight is put on person names in the similarity.
- cos+TD+avg-lnk: time decay coefficient $\alpha = 1$ in equation 13, which means the similarity between two stories decays to $1/e$ of its value if they are at opposite ends of the topic.
- cos+N(T)+avg-lnk: the number of true events is used to control the agglomerative clustering algorithm; merging stops when the number of clusters reaches the number of true events.
- cos+N(T)+T+avg-lnk: like N(T), but agglomeration also stops if the maximal similarity falls below the threshold $T$.
- cos+TD+N(T)+avg-lnk: like N(T), but the similarities are decayed, with $\alpha = 1$ in equation 13.
- cos+TD+N(T)+T+avg-lnk: like TD+N(T), but calculation halts when the maximal similarity is smaller than the threshold $T$.
Our experiments demonstrate that single-link and complete-link similarities perform worse than average link, which is reasonable since average link is less sensitive to one or two story pairs. We had expected locations and person names to improve the results, but that was not the case.
Analysis of the topics shows that many on-topic stories share the same locations or persons irrespective of the event they belong to, so these features may be more useful for identifying topics than events. Time decay is successful because events are temporally localized, i.e., stories discussing the same event tend to be adjacent to each other in time. We also noticed that providing the number of true events improves performance, since it guides the clustering algorithm to the correct granularity; however, for most applications it is not available, and we used it only as a 'cheat' experiment for comparison with the other algorithms. On the whole, time decay proved to be the most powerful feature besides cosine similarity on both the training and test sets.
7.2 Dependencies
In this subsection, our goal is to model only the dependencies. We use the true mapping function $f$ and, by implication, the true events. We build our dependency structure $\mathcal{E}'$ using all five models described in section 6.2. We first train our models on the 26 training topics; training involves learning the best threshold $T$ for each model. We then test the performance of all the trained models on the 27 test topics, evaluating with the average values of dependency precision (DP), dependency recall (DR) and dependency F-measure (DF). We consider the complete-link model to be our baseline, since for each event it trivially considers all earlier events to be parents.
Table 4 lists the results on the training set. While all the algorithms except MST outperform the baseline complete-link algorithm, only the nearest-parent algorithm is statistically significantly better than the baseline in terms of its DF-value, using a one-tailed paired T-test at the 95% confidence level.

Model             best T   DP     DR     DF     P-value
Nearest Parent    0.025    0.55   0.62   0.56   0.04*
Best Similarity   0.02     0.51   0.62   0.53   0.24
MST               0.0      0.46   0.58   0.48   -
Simple Thresh.    0.045    0.45   0.76   0.52   0.14
Complete-link     -        0.36   0.93   0.48   -

Table 4: Results on the training set. Best T is the optimal value of the threshold T; * indicates the corresponding model is statistically significant compared to the baseline using a one-tailed, paired T-test at the 95% confidence level.

In Table 5 we present the comparison of the models on the test set. Here we do no tuning, but set the thresholds to the optimal values learned from the training set. The results hold some surprises: the nearest-parent model, which was significantly better than the baseline on the training set, turns out to be worse than the baseline on the test set. All the other models, however, are better than the baseline, including best similarity, which is statistically significantly so. Notice that all the models that beat the baseline in terms of DF actually sacrifice recall relative to the baseline but improve precision substantially, thereby improving their DF-measure. Both simple thresholding and best similarity are better than the baseline on both the training and test sets, although the improvement is not always significant. On the whole, we observe that the surface-level features we used capture the dependencies to a reasonable degree, achieving a best value of 0.72 DF on the test set. Although there is a lot of room for improvement, we believe this is a good first step.

Model                      DP     DR     DF     P-value
Nearest Parent             0.61   0.60   0.60   -
Best Similarity            0.71   0.74   0.72   0.04*
MST                        0.70   0.68   0.69   0.22
Simple Thresh.             0.57   0.75   0.64   0.24
Baseline (Complete-link)   0.50   0.94   0.63   -

Table 5: Results on the test set
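The significance claims in Tables 2-5 rest on one-tailed paired T-tests over per-topic scores. A minimal sketch of such a test with scipy follows; this is our reconstruction of the procedure, not the authors' script. Since scipy's `ttest_rel` is two-tailed, the p-value is halved when the observed mean difference is in the hypothesized direction.

```python
from scipy import stats

def one_tailed_paired_t(system_scores, baseline_scores, level=0.05):
    """Per-topic paired comparison; H1: system > baseline."""
    t, p_two = stats.ttest_rel(system_scores, baseline_scores)
    p_one = p_two / 2 if t > 0 else 1 - p_two / 2
    return p_one, p_one < level
```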
7.3 Combining Clustering and Dependencies
Now that we have studied the clustering and dependency algorithms in isolation, we combine the best-performing algorithms and build the entire event model. Since none of the dependency algorithms has been shown to be consistently and significantly better than the others, we use all of them in our experimentation. From the clustering techniques, we choose the best-performing cos+TD. As a baseline, we use a combination of the baselines of the two components, i.e., cos for clustering and complete-link for dependencies. Note that we need to retrain all the algorithms on the training set, because the objective function to optimize is now JF, the joint F-measure; for each algorithm, we need to optimize both the clustering threshold and the dependency threshold. We did this empirically on the training set, and the optimal values are listed in Table 6.

Model                          Cluster T   Dep. T   CP     CR     CF     DP     DR     DF     JF     P-value
cos+TD+Nearest-Parent          0.055       0.02     0.51   0.53   0.49   0.21   0.19   0.19   0.27   -
cos+TD+Best-Similarity         0.04        0.02     0.45   0.70   0.53   0.21   0.33   0.23   0.32   -
cos+TD+MST                     0.04        0.00     0.45   0.70   0.53   0.22   0.35   0.25   0.33   -
cos+TD+Simple-Thresholding     0.065       0.02     0.56   0.47   0.48   0.23   0.61   0.32   0.38   0.0004*
Baseline (cos+Complete-link)   0.10        -        0.58   0.31   0.38   0.20   0.67   0.30   0.33   -

Table 6: Combined results on the training set

The results on the training set, also presented in Table 6, indicate that cos+TD+Simple-Thresholding is significantly better than the baseline in terms of the joint F-value JF, using a one-tailed paired T-test at the 95% confidence level. On the whole, we notice that while the clustering performance is comparable to the experiments in section 7.1, the overall performance is undermined by the low dependency performance. Unlike our experiments in section 7.2, where we provided the true clusters to the system, here the system has to deal with deterioration in cluster quality; hence the performance of the dependency algorithms suffers substantially, lowering the overall performance. The results on the test set, shown in Table 7, tell a very similar story, and we notice a fair amount of consistency in the performance of the combination algorithms: cos+TD+Simple-Thresholding again outperforms the baseline significantly. The test set results also point to the fact that the clustering component remains a bottleneck in achieving good overall performance.

Model                          CP     CR     CF     DP     DR     DF     JF     P-value
cos+TD+Nearest Parent          0.57   0.50   0.50   0.27   0.19   0.21   0.30   -
cos+TD+Best Similarity         0.48   0.70   0.54   0.31   0.27   0.26   0.35   -
cos+TD+MST                     0.48   0.70   0.54   0.31   0.30   0.28   0.37   -
cos+TD+Simple Thresholding     0.60   0.39   0.44   0.32   0.66   0.42   0.43   0.0081*
Baseline (cos+Complete-link)   0.66   0.27   0.36   0.30   0.72   0.43   0.39   -

Table 7: Combined results on the test set
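The retraining described above amounts to a two-dimensional search over the clustering and dependency thresholds on the training topics. A simple grid-search sketch follows, assuming a hypothetical `run_pipeline` function (ours, not the paper's) that clusters a topic, builds the dependencies, and returns the metrics dictionary from the earlier evaluation sketch; the grid ranges below are invented for illustration.

```python
import numpy as np

def tune_thresholds(train_topics, run_pipeline,
                    cluster_grid=np.arange(0.0, 0.2, 0.005),
                    dep_grid=np.arange(0.0, 0.1, 0.005)):
    """Pick the (cluster T, dependency T) pair maximizing mean JF.
    run_pipeline(topic, ct, dt) -> metrics dict as returned by evaluate()."""
    best = (None, None, -1.0)
    for ct in cluster_grid:
        for dt in dep_grid:
            jf = np.mean([run_pipeline(topic, ct, dt)["JF"]
                          for topic in train_topics])
            if jf > best[2]:
                best = (ct, dt, jf)
    return best
```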
8. DISCUSSION AND CONCLUSIONS
In this paper, we have presented a new perspective on modeling news topics. Contrary to the TDT view of topics as flat collections of news stories, we view a news topic as a relational structure of events interconnected by dependencies, and we proposed several approaches both for clustering stories into events and for constructing dependencies among them. We developed a time-decay-based clustering approach that takes advantage of the temporal localization of news stories on the same event and showed that it performs significantly better than a baseline based on cosine similarity alone. Our experiments also show that we can do fairly well on dependencies using only surface features such as cosine similarity and the time stamps of news stories, as long as the true events are provided to the system; however, performance deteriorates rapidly when the system has to discover the events by itself. Despite that discouraging result, we have shown that our combined algorithms perform significantly better than the baselines. Our results indicate that modeling dependencies can be a very hard problem, especially when clustering performance is below the ideal level: errors in clustering have a magnifying effect on errors in dependencies, as we have seen in our experiments. Hence, we should focus on improving not only the dependencies but also the clustering at the same time.
As part of our future work, we plan to investigate the data further and discover new features that influence clustering as well as dependencies. For modeling dependencies, a probabilistic framework may be a better choice, since there is no definite yes/no answer for the causal relations among some events. We also hope to devise an iterative algorithm that improves clustering and dependency performance alternately, as suggested by one of the reviewers, and to expand our labeled corpus to include more diverse news sources and larger, more complex event structures.
Acknowledgments
We would like to thank the three anonymous reviewers for their valuable comments. This work was supported in part by the Center for Intelligent Information Retrieval and in part by SPAWARSYSCEN-SD grant number N66001-02-1-8903. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.
9. REFERENCES
[1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y. Yang. Topic detection and tracking pilot study: Final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, pages 194-218, 1998.
[2] J. Allan, A. Feng, and A. Bolivar. Flexible intrinsic evaluation of hierarchical clustering for TDT. In Proceedings of the ACM Twelfth International Conference on Information and Knowledge Management, pages 263-270, Nov 2003.
[3] James Allan, editor. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2000.
[4] James Allan, Rahul Gupta, and Vikas Khandelwal. Temporal summaries of new topics. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 10-18. ACM Press, 2001.
[5] Regina Barzilay and Lillian Lee. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 113-120, 2004.
[6] D. Lawrie and W. B. Croft. Discovering and comparing topic hierarchies. In Proceedings of the RIAO 2000 Conference, pages 314-330, 1999.
[7] David D. Lewis and Kimberly A. Knowles. Threading electronic mail: a preliminary study. Information Processing and Management, 33(2):209-217, 1997.
[8] Juha Makkonen. Investigations on event evolution in TDT. In Proceedings of the HLT-NAACL 2003 Student Workshop, pages 43-48, 2004.
[9] Aixin Sun and Ee-Peng Lim. Hierarchical text classification and evaluation. In Proceedings of the 2001 IEEE International Conference on Data Mining, pages 521-528.
IEEE Computer Society, 2001.
[10] Yiming Yang, Jaime Carbonell, Ralf Brown, Thomas Pierce, Brian T. Archibald, and Xin Liu. Learning approaches for detecting and tracking news events. IEEE Intelligent Systems, Special Issue on Applications of Intelligent Information Retrieval, 14(4):32-43, 1999.
Event Threading within News Topics ABSTRACT With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them. 1. INTRODUCTION News forms a major portion of information disseminated in the world everyday. Common people and news analysts alike are very interested in keeping abreast of new things that happen in the news, but it is becoming very difficult to cope with the huge volumes of information that arrives each day. Hence there is an increasing need for automatic techniques to organize news stories in a way that helps users interpret and analyze them quickly. This problem is addressed by a research program called Topic Detection and Tracking (TDT) [3] that runs an open annual competition on standardized tasks of news organization. One of the shortcomings of current TDT evaluation is its view of news topics as flat collection of stories. For example, the detection task of TDT is to arrange a collection of news stories into clusters of topics. However, a topic in news is more than a mere collection of stories: it is characterized by a definite structure of inter-related events. This is indeed recognized by TDT which defines a topic as ` a set ofnews stories that are strongly related by some seminal realworld event' where an event is defined as ` something that happens at a specific time and location' [3]. For example, when a bomb explodes in a building, that is the seminal event that triggers the topic. Other events in the topic may include the rescue attempts, the search for perpetrators, arrests and trials and so on. We see that there is a pattern of dependencies between pairs of events in the topic. In the above example, the event of rescue attempts is ` influenced' by the event of bombing and so is the event of search for perpetrators. In this work we investigate methods for modeling the structure of a topic in terms of its events. By structure, we mean not only identifying the events that make up a topic, but also establishing dependencies--generally causal--among them. We call the process of recognizing events and identifying dependencies among them event threading, an analogy to email threading that shows connections between related email messages. We refer to the resulting interconnected structure of events as the event model of the topic. 
Although this paper focuses on threading events within an existing news topic, we expect that such event based dependency structure more accurately reflects the structure of news than strictly bounded topics do. From a user's perspective, we believe that our view of a news topic as a set of interconnected events helps him/her get a quick overview of the topic and also allows him/her navigate through the topic faster. The rest of the paper is organized as follows. In section 2, we discuss related work. In section 3, we define the problem and use an example to illustrate threading of events within a news topic. In section 4, we describe how we built the corpus for our problem. Section 5 presents our evaluation techniques while section 6 describes the techniques we use for modeling event structure. In section 7 we present our experiments and results. Section 8 concludes the paper with a few observations on our results and comments on future work. 2. RELATED WORK The process of threading events together is related to threading of electronic mail only by name for the most part. Email usually incorporates a strong structure of referenced messages and consistently formatted subject headings--though information retrieval techniques are useful when the structure breaks down [7]. Email threading captures reference dependencies between messages and does not attempt to reflect any underlying real-world structure of the matter under discussion. Another area of research that looks at the structure within a topic is hierarchical text classification of topics [9, 6]. The hierarchy within a topic does impose a structure on the topic, but we do not know of an effort to explore the extent to which that structure reflects the underlying event relationships. Barzilay and Lee [5] proposed a content structure modeling technique where topics within text are learnt using unsupervised methods, and a linear order of these topics is modeled using hidden Markov models. Our work differs from theirs in that we do not constrain the dependency to be linear. Also their algorithms are tuned to work on specific genres of topics such as earthquakes, accidents, etc., while we expect our algorithms to generalize over any topic. In TDT, researchers have traditionally considered topics as flatclusters [1]. However, in TDT-2003, a hierarchical structure of topic detection has been proposed and [2] made useful attempts to adopt the new structure. However this structure still did not explicitly model any dependencies between events. In a work closest to ours, Makkonen [8] suggested modeling news topics in terms of its evolving events. However, the paper stopped short of proposing any models to the problem. Other related work that dealt with analysis within a news topic includes temporal summarization of news topics [4]. 3. PROBLEM DEFINITION AND NOTATION In this work, we have adhered to the definition of event and topic as defined in TDT. We present some definitions (in italics) and our interpretations (regular-faced) below for clarity. 1. Story: A story is a news article delivering some information to users. In TDT, a story is assumed to refer to only a single topic. In this work, we also assume that each story discusses a single event. In other words, a story is the smallest atomic unit in the hierarchy (topic! event! story). Clearly, both the assumptions are not necessarily true in reality, but we accept them for simplicity in modeling. 2. Event: An event is something that happens at some specific time and place [10]. 
In our work, we represent an event by a set of stories that discuss it. Following the assumption of atomicity of a story, this means that any set of distinct events can be represented by a set of non-overlapping clusters of news stories. 3. Topic: A set of news stories strongly connected by a seminal event. We expand on this definition and interpret a topic as a series of related events. Thus a topic can be represented by clusters of stories each representing an event and a set of (directed or undirected) edges between pairs of these clusters representing the dependencies between these events. We will describe this representation of a topic in more detail in the next section. 4. Topic detection and tracking (TDT): Topic detection detects clusters of stories that discuss the same topic; Topic tracking detects stories that discuss a previously known topic [3]. Thus TDT concerns itself mainly with clustering stories into topics that discuss them. 5. Event threading: Event threading detects events within in a topic, and also captures the dependencies among the events. Thus the main difference between event threading and TDT is that we focus our modeling effort on microscopic events rather than larger topics. Additionally event threading models the relatedness or dependencies between pairs of events in a topic while TDT models topics as unrelated clusters of stories. We first define our problem and representation of our model formally and then illustrate with the help of an example. We are given a set of n news stories S = fs,, ~ ~ ~, sng on a given topic T and their time of publication. We define a set of events E _ While the first constraint says that each event is an element in the power set of S, the second constraint ensures that each story can belong to at most one event. The last constraint tells us that every story belongs to one of the events in E. In fact this allows us to define a mapping function f from stories to events as follows: Further, we also define a set of directed edges E _ f (Ei, Ei) g which denote dependencies between events. It is important to explain what we mean by this directional dependency: While the existence of an edge itself represents relatedness of two events, the direction could imply causality or temporal-ordering. By causal dependency we mean that the occurrence of event B is related to and is a consequence of the occurrence of event A. By temporal ordering, we mean that event B happened after event A and is related to A but is not necessarily a consequence of A. For example, consider the following two events: ` plane crash' (event A) and ` subsequent investigations' (event B) in a topic on a plane crash incident. Clearly, the investigations are a result of the crash. Hence an arrow from A to B falls under the category of causal dependency. Now consider the pair of events ` Pope arrives in Cuba' (event A) and ` Pope meets Castro' (event B) in a topic that discusses Pope's visit to Cuba. Now events A and B are closely related through their association with the Pope and Cuba but event B is not necessarily a consequence of the occurrence of event A. An arrow in such scenario captures what we call time ordering. In this work, we do not make an attempt to distinguish between these two kinds of dependencies and our models treats them as identical. A simpler (and hence less controversial) choice would be to ignore direction in the dependencies altogether and consider only undirected edges. 
This choice definitely makes sense as a first step but we chose the former since we believe directional edges make more sense to the user as they provide a more illustrative flow-chart perspective to the topic. To make the idea of event threading more concrete, consider the example of TDT3 topic 30005, titled ` Osama bin Laden's Indictment' (in the 1998 news). This topic has 23 stories which form 5 events. An event model of this topic can be represented as in figure 1. Each box in the figure indicates an event in the topic of Osama's indictment. The occurrence of event 2, namely ` Trial and Indictment of Osama' is dependent on the event of ` evidence gathered by CIA', i.e., event 1. Similarly, event 2 influences the occurrences of events 3, 4 and 5, namely ` Threats from Militants', ` Reactions from Muslim World' and ` announcement of reward'. Thus all the dependencies in the example are causal. Extending our notation further, we call an event A a parent of B and B the child of A, if (A, B) E E. We define an event model M = (S, E) to be a tuple of the set of events and set of dependencies. Figure 1: An event model of TDT topic ` Osama bin Laden's indictment'. Event threading is strongly related to topic detection and tracking, but also different from it significantly. It goes beyond topics, and models the relationships between events. Thus, event threading can be considered as a further extension of topic detection and tracking and is more challenging due to at least the following difficulties. 1. The number of events is unknown. 2. The granularity of events is hard to define. 3. The dependencies among events are hard to model. 4. Since it is a brand new research area, no standard evaluation metrics and benchmark data is available. In the next few sections, we will describe our attempts to tackle these problems. 4. LABELED DATA We picked 28 topics from the TDT2 corpus and 25 topics from the TDT3 corpus. The criterion we used for selecting a topic is that it should contain at least 15 on-topic stories from CNN headline news. If the topic contained more than 30 CNN stories, we picked only the first 30 stories to keep the topic short enough for annotators. The reason for choosing only CNN as the source is that the stories from this source tend to be short and precise and do not tend to digress or drift too far away from the central theme. We believe modeling such stories would be a useful first step before dealing with more complex data sets. We hired an annotator to create truth data. Annotation includes defining the event membership for each story and also the dependencies. We supervised the annotator on a set of three topics that we did our own annotations on and then asked her to annotate the 28 topics from TDT2 and 25 topics from TDT3. In identifying events in a topic, the annotator was asked to broadly follow the TDT definition of an event, i.e., ` something that happens at a specific time and location'. The annotator was encouraged to merge two events A and B into a single event C if any of the stories discusses both A and B. This is to satisfy our assumption that each story corresponds to a unique event. The annotator was also encouraged to avoid singleton events, events that contain a single news story, if possible. We realized from our own experience that people differ in their perception of an event especially when the number of stories in that event is small. As part of the guidelines, we instructed the annotator to assign titles to all the events in each topic. 
We believe that this would help make her understanding of the events more concrete. We however, do not use or model these titles in our algorithms. In defining dependencies between events, we imposed no restrictions on the graph structure. Each event could have single, multiple or no parents. Further, the graph could have cycles or orphannodes. The annotator was however instructed to assign a dependency from event A to event B if and only if the occurrence of B is ` either causally influenced by A or is closely related to A and follows A in time'. From the annotated topics, we created a training set of 26 topics and a test set of 27 topics by merging the 28 topics from TDT2 and 25 from TDT3 and splitting them randomly. Table 1 shows that the training and test sets have fairly similar statistics. Table 1: Statistics of annotated data 5. EVALUATION A system can generate some event model M' = (S', E') using certain algorithms, which is usually different from the truth model M = (S, E) (we assume the annotator did not make any mistake). Comparing a system event model M' with the true model M requires comparing the entire event models including their dependency structure. And different event granularities may bring huge discrepancy between M' and M. This is certainly non-trivial as even testing whether two graphs are isomorphic has no known polynomial time solution. Hence instead of comparing the actual structure we examine a pair of stories at a time and verify if the system and true labels agree on their event-memberships and dependencies. Specifically, we compare two kinds of story pairs: • Cluster pairs (C (M)): These are the complete set of unordered pairs (si, sj) of stories si and sj that fall within the same event given a model M. Formally, where f is the function in M that maps stories to events as defined in equation 4. • Dependency pairs (D (M)): These are the set of all ordered pairs of stories (si, sj) such that there is a dependency from the event of si to the event of sj in the model M. Note the story pair is ordered here, so (si, sj) is not equivalent to (sj, si). In our evaluation, a correct pair with wrong Figure 2: Evaluation measures direction will be considered a mistake. As we mentioned earlier in section 3, ignoring the direction may make the problem simpler, but we will lose the expressiveness of our representation. Given these two sets of story pairs corresponding to the true event model M and the system event model M', we define recall and precision for each category as follows. • Cluster Precision (CP): It is the probability that two randomly selected stories si and sj are in the same true-event given that they are in the same system event. where f' is the story-event mapping function corresponding to the model M'. • Cluster Recall (CR): It is the probability that two randomly selected stories si and sj are in the same system-event given that they are in the same true event. • Dependency Precision (DP): It is the probability that there is a dependency between the events of two randomly selected stories si and sj in the true model M given that they have a dependency in the system model M'. Note that the direction of dependency is important in comparison. • Dependency Recall (DR): It is the probability that there is a dependency between the events of two randomly selected stories si and sj in the system model M' given that they have a dependency in the true model M. Again, the direction of dependency is taken into consideration. 
The measures are illustrated by an example in figure 2. We also combine these measures using the well known F1-measure commonly used in text classification and other research areas as shown below. where CF and DF are the cluster and dependency F1-measures respectively and JF is the Joint F1-measure (JF) that we use to measure the overall performance. 6. TECHNIQUES The task of event modeling can be split into two parts: clustering the stories into unique events in the topic and constructing dependencies among them. In the following subsections, we describe techniques we developed for each of these sub-tasks. 6.1 Clustering Each topic is composed of multiple events, so stories must be clustered into events before we can model the dependencies among them. For simplicity, all stories in the same topic are assumed to be available at one time, rather than coming in a text stream. This task is similar to traditional clustering but features other than word distributions may also be critical in our application. In many text clustering systems, the similarity between two stories is the inner product of their tf-idf vectors, hence we use it as one of our features. Stories in the same event tend to follow temporal locality, so the time stamp of each story can be a useful feature. Additionally, named-entities such as person and location names are another obvious feature when forming events. Stories in the same event tend to be related to the same person (s) and locations (s). In this subsection, we present an agglomerative clustering algorithm that combines all these features. In our experiments, however, we study the effect of each feature on the performance separately using modified versions of this algorithm. 6.1.1 Agglomerative clustering with time decay (ACDT) We initialize our events to singleton events (clusters), i.e., each cluster contains exactly one story. So the similarity between two events, to start with, is exactly the similarity between the corresponding stories. The similarity wsum (s1; s2) between two stories s1 and s2 is given by the following formula: Here! 1,! 2,! 3 are the weights on different features. In this work, we determined them empirically, but in the future, one can consider more sophisticated learning techniques to determine them. cos (s1; s2) is the cosine similarity of the term vectors. Loc (s1; s2) is 1 if there is some location that appears in both stories, otherwise it is 0. Per (s1; s2) is similarly defined for person name. We use time decay when calculating similarity of story pairs, i.e., the larger time difference between two stories, the smaller their similarities. The time period of each topic differs a lot, from a few days to a few months. So we normalize the time difference using the whole duration of that topic. The time decay adjusted similarity where t1 and t2 are the time stamps for story 1 and 2 respectively. T is the time difference between the earliest and the latest story in the given topic. a is the time decay factor. In each iteration, we find the most similar event pair and merge them. We have three different ways to compute the similarity between two events Eu and Ev: ~ Average link: In this case the similarity is the average of the similarities of all pairs of stories between Eu and Ev as shown below: ~ Complete link: The similarity between two events is given by the smallest of the pair-wise similarities. ~ Single link: Here the similarity is given by the best similarity between all pairs of stories. 
This process continues until the maximum similarity falls below the threshold or the number of clusters is smaller than a given number. 6.2 Dependency modeling Capturing dependencies is an extremely hard problem because it may require a ` deeper understanding' of the events in question. A human annotator decides on dependencies not just based on the information in the events but also based on his/her vast repertoire of domain-knowledge and general understanding of how things operate in the world. For example, in Figure 1 a human knows ` Trial and indictment of Osama' is influenced by ` Evidence gathered by CIA' because he/she understands the process of law in general. We believe a robust model should incorporate such domain knowledge in capturing dependencies, but in this work, as a first step, we will rely on surface-features such as time-ordering of news stories and word distributions to model them. Our experiments in later sections demonstrate that such features are indeed useful in capturing dependencies to a large extent. In this subsection, we describe the models we considered for capturing dependencies. In the rest of the discussion in this subsection, we assume that we are already given the mapping f': S! E and we focus only on modeling the edges E'. First we define a couple of features that the following models will employ. First we define a 1-1 time-ordering function t: S! f1, ~ ~ ~, ng that sorts stories in ascending order by their time of publication. Now, the event-time-ordering function te is defined as follows. In other words, te time-orders events based on the time-ordering of their respective first stories. We will also use average cosine similarity between two events as a feature and it is defined as follows. 6.2.1 Complete-Link model In this model, we assume that there are dependencies between all pairs of events. The direction of dependency is determined by the time-ordering of the first stories in the respective events. Formally, the system edges are defined as follows. where te is the event-time-ordering function. In other words, the dependency edge is directed from event Eu to event Ev, if the first story in event Eu is earlier than the first story in event Ev. We point out that this is not to be confused with the complete-link algorithm in clustering. Although we use the same names, it will be clear from the context which one we refer to. 6.2.2 Simple Thresholding This model is an extension of the complete link model with an additional constraint that there is a dependency between any two events Eu and Ev only if the average cosine similarity between event Eu and event Ev is greater than a threshold T. Formally, 6.2.3 Nearest Parent Model In this model, we assume that each event can have at most one parent. We define the set of dependencies as follows. Thus, for each event Ev, the nearest parent model considers only the event preceding it as defined by te as a potential candidate. The candidate is assigned as the parent only if the average similarity exceeds a pre-defined threshold T. 6.2.4 Best Similarity Model This model also assumes that each event can have at most one parent. An event Ev is assigned a parent Eu if and only if Eu is the most similar earlier event to Ev and the similarity exceeds a threshold T. 
Mathematically, this can be expressed as: 6.2.5 Maximum Spanning Tree model In this model, we first build a maximum spanning tree (MST) using a greedy algorithm on the following fully connected weighted, undirected graph whose vertices are the events and whose edges ^ E are defined as follows: Let MST (^ E) be the set of edges in the maximum spanning tree of E'. Now our directed dependency edges E are defined as follows. Thus in this model, we assign dependencies between the most similar events in the topic. 7. EXPERIMENTS Our experiments consists of three parts. First we modeled only the event clustering part (defining the mapping function f') using clustering algorithms described in section 6.1. Then we modeled only the dependencies by providing to the system the true clusters and running only the dependency algorithms of section 6.2. Finally, we experimented with combinations of clustering and dependency algorithms to produce the complete event model. This way of experimentation allows us to compare the performance of our algorithms in isolation and in association with other components. The following subsections present the three parts of our experimentation. 7.1 Clustering We have tried several variations of the ACDT algorithm to study the effects of various features on the clustering performance. All the parameters are learned by tuning on the training set. We also tested the algorithms on the test set with parameters fixed at their optimal values learned from training. We used agglomerative clus Table 2: Comparison of agglomerative clustering algorithms (training set) tering based on only cosine similarity as our clustering baseline. The results on the training and test sets are in Table 2 and 3 respectively. We use the Cluster F1-measure (CF) averaged over all topics as our evaluation criterion. Table 3: Comparison of agglomerative clustering algorithms (test set) P-value marked with a * means that it is a statistically significant improvement over the baseline (95% confidence level, one tailed T-test). The methods shown in table 2 and 3 are: • Baseline: tf-idf vector weight, cosine similarity, average link in clustering. In equation 12, cw1 = 1, W2 = W3 = 0. And a = 0 in equation 13. This F-value is the maximum obtained by tuning the threshold. • cos +1 - lnk: Single link comparison (see equation 16) is used where similarity of two clusters is the maximum of all story pairs, other configurations are the same as the baseline run. • cos + all-lnk: Complete link algorithm of equation 15 is used. Similar to single link but it takes the minimum similarity of all story pairs. • cos + Loc + avg-lnk: Location names are used when calculating similarity. W2 = 0.05 in equation 12. All algorithms starting from this one use average link (equation 14), since single link and complete link do not show any improvement of performance. • cos + Per + avg-lnk: W3 = 0.05 in equation 12, i.e., we put some weight on person names in the similarity. • cos + TD + avg-lnk: Time Decay coefficient a = 1 in equation 13, which means the similarity between two stories will be decayed to 1/e if they are at different ends of the topic. • cos + N (T) + avg-lnk: Use the number of true events to control the agglomerative clustering algorithm. When the number of clusters is fewer than that of truth events, stop merging clusters. • cos + N (T) + T + avg-lnk: similar to N (T) but also stop agglomeration if the maximal similarity is below the threshold T. 
• cos + TD: + N (T) + avg-lnk: similar to N (T) but the similarities are decayed, a = 1 in equation 13. • cos + TD+N (T) + T + avg-lnk: similar to TD+N (Truth) but cal culation halts when the maximal similarity is smaller than the threshold T. Our experiments demonstrate that single link and complete link similarities perform worse than average link, which is reasonable since average link is less sensitive to one or two story pairs. We had expected locations and person names to improve the result, but it is not the case. Analysis of topics shows that many on-topic stories share the same locations or persons irrespective of the event they belong to, so these features may be more useful in identifying topics rather than events. Time decay is successful because events are temporally localized, i.e., stories discussing the same event tend to be adjacent to each other in terms of time. Also we noticed that providing the number of true events improves the performance since it guides the clustering algorithm to get correct granularity. However, for most applications, it is not available. We used it only as a "cheat" experiment for comparison with other algorithms. On the whole, time decay proved to the most powerful feature besides cosine similarity on both training and test sets. 7.2 Dependencies In this subsection, our goal is to model only dependencies. We use the true mapping function f and by implication the true events V. We build our dependency structure E' using all the five models described in section 6.2. We first train our models on the 26 training topics. Training involves learning the best threshold T for each of the models. We then test the performances of all the trained models on the 27 test topics. We evaluate our performance using the average values of Dependency Precision (DP), Dependency Recall (DR) and Dependency F-measure (DF). We consider the complete-link model to be our baseline since for each event, it trivially considers all earlier events to be parents. Table 4 lists the results on the training set. We see that while all the algorithms except MST outperform the baseline complete-link algorithm, the nearest Parent algorithm is statistically significant from the baseline in terms of its DF-value using a one-tailed paired T-test at 95% confidence level. Table 4: Results on the training set: Best T is the optimal value of the threshold T. * indicates the corresponding model is statistically significant compared to the baseline using a one-tailed, paired T-test at 95% confidence level. In table 5 we present the comparison of the models on the test set. Here, we do not use any tuning but set the threshold to the corresponding optimal values learned from the training set. The results throw some surprises: The nearest parent model, which was significantly better than the baseline on training set, turns out to be worse than the baseline on the test set. However all the other models are better than the baseline including the best similarity which is statistically significant. Notice that all the models that perform better than the baseline in terms of DF, actually sacrifice their recall performance compared to the baseline, but improve on their precision substantially thereby improving their performance on the DF-measure. We notice that both simple-thresholding and best similarity are better than the baseline on both training and test sets although the improvement is not significant. 
On the whole, we observe that the surface-level features we used capture the dependencies to a reasonable degree, achieving a best value of 0.72 DF on the test set. Although there is ample room for improvement, we believe this is a good first step.

Table 5: Results on the test set

7.3 Combining Clustering and Dependencies
Having studied the clustering and dependency algorithms in isolation, we now combine the best-performing algorithms and build the entire event model. Since none of the dependency algorithms has been shown to be consistently and significantly better than the others, we use all of them in our experimentation. From the clustering techniques, we choose the best-performing cos + TD. As a baseline, we use a combination of the baselines of the two components, i.e., cos for clustering and complete-link for dependencies. Note that we need to retrain all the algorithms on the training set, because the objective function to optimize is now JF, the joint F-measure; for each algorithm, we need to optimize both the clustering threshold and the dependency threshold. We did this empirically on the training set, and the optimal values are listed in Table 6. The results on the training set, also presented in Table 6, indicate that cos + TD + Simple-Thresholding is significantly better than the baseline in terms of the joint F-value JF, using a one-tailed, paired T-test at the 95% confidence level. On the whole, we notice that while the clustering performance is comparable to the experiments in section 7.1, the overall performance is undermined by the low dependency performance. Unlike the experiments in section 7.2, where we provided the true clusters to the system, here the system has to cope with deterioration in cluster quality; the performance of the dependency algorithms therefore suffers substantially, lowering the overall performance. The results on the test set, shown in Table 7, tell a very similar story. We also notice a fair amount of consistency in the performance of the combination algorithms: cos + TD + Simple-Thresholding again outperforms the baseline significantly. The test set results also point to the fact that the clustering component remains the bottleneck in achieving good overall performance.
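The exact definitions of DP, DR and DF appear in section 5 (not reproduced here); the sketch below shows the standard precision/recall formulation over sets of directed (parent, child) edges that these measures are assumed to follow, with the function name and the toy example being ours.

```python
# Hypothetical helper for Dependency Precision (DP), Recall (DR) and
# F-measure (DF), assuming the usual set-based precision/recall over
# directed dependency edges.
def dependency_scores(predicted_edges, true_edges):
    """Both arguments are sets of directed (parent, child) event pairs."""
    correct = predicted_edges & true_edges
    dp = len(correct) / len(predicted_edges) if predicted_edges else 0.0
    dr = len(correct) / len(true_edges) if true_edges else 0.0
    df = 2 * dp * dr / (dp + dr) if dp + dr > 0 else 0.0
    return dp, dr, df

# Example: two of three predicted edges are correct and two of four
# true edges are recovered, giving DP ~ 0.67, DR = 0.5, DF ~ 0.57.
pred = {(0, 1), (1, 2), (0, 3)}
true = {(0, 1), (1, 2), (2, 3), (1, 3)}
print(dependency_scores(pred, true))
```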
C-72
GUESS: Gossiping Updates for Efficient Spectrum Sensing
Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.
[ "spectrum sens", "rf spectrum", "rf interfer", "cognit radio", "spectrum alloc", "coordin sens", "fm aggreg", "increment gossip protocol", "opportunist spectrum share", "spatial decai aggreg", "innetwork aggreg", "coordin spectrum sens", "gossip protocol", "increment algorithm" ]
[ "P", "P", "M", "M", "M", "R", "U", "R", "R", "U", "U", "R", "R", "M" ]
GUESS: Gossiping Updates for Efficient Spectrum Sensing Nabeel Ahmed University of Waterloo David R. Cheriton School of Computer Science n3ahmed@uwaterloo.ca David Hadaller University of Waterloo David R. Cheriton School of Computer Science dthadaller@uwaterloo.ca Srinivasan Keshav University of Waterloo David R. Cheriton School of Computer Science keshav@uwaterloo.ca ABSTRACT Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches. Categories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applications General Terms Algorithms, Performance, Experimentation 1. INTRODUCTION There has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum. However, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation. Currently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users. New spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users that can use it when the primary user is absent. The second type of allocation scheme is termed opportunistic spectrum sharing. The FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1]. As a result, a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment. Under the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers).

Figure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user.
This can be done by sensing the environment to detect the presence of primary users. However, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1. Here, coordination between secondary users is the only way for shadowed users to detect the primary. In general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5]. To realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation. In this paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations. We defer the problem of spectrum allocation to future work. Spectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes: (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques. We advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet. This is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1). Moreover, compared to centralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming). Coordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales. Data size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned. This data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment. This paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks. Our approach is motivated by the following key observations: 1. Low-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost and low duty-cycle devices and thus cannot perform complex RF signal processing (e.g. matched filtering). Many are typically equipped with simple energy detectors that gather only approximate information. 2. Approximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data. Thus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing. 3. RF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently. Moreover, utilization of a specific RF band affects only that band and not the entire spectrum. Therefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol to update the information for that band alone, leaving in place information already collected for other bands. This allows rapid detection of change while saving the overhead of exchanging unnecessary information.
Based on these observations, GUESS makes the following contributions: 1. A novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing. These algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities. 2. An application of in-network aggregation for dissemination of spectrum summaries. We argue that approximate summaries are adequate for performing accurate radio parameter tuning. 3. An extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries. Compared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources. The rest of the paper is organized as follows. Section 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing. Section 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping. Sections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing. Section 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work. 2. MOTIVATION To estimate the scale of the problem, In-stat predicts that the number of WiFi-enabled devices sold annually alone will grow to 430 million by 2009 [2]. Therefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other. As a result, distributed spectrum sensing and allocation would become both important and fundamental. Coordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing. Cabric et al. [5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate. However, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged. Limited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination. This implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface. Therefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function. Thus, any such coordination must incur minimal network overhead. Short Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion. This is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements. Here, fading and multi-path interference heavily impact sensing abilities. Signal level can drop to a deep null with just a λ/4 movement in receiver position (3.7 cm at 2 GHz), where λ is the wavelength [14]. Coordination which does not support rapid dissemination of information will not be able to account for such RF variations.
Large Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan. Suppose we wish to compute the average signal energy in each of 100 discretized frequency bands, and each signal can have up to 128 discrete energy levels. Exchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth. Contrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation. By applying gossip and FM aggregation, aggregate bandwidth requirements drop to (c·logN time-steps × 50 devices × 700 bits per transmission) = 0.40 Mbps, since 12 time-steps are needed to propagate the data (with c = 2, for illustrative purposes; convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment). This is explained further in Section 4. Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7, these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.

Figure 2: Using FM aggregation to compute average signal level measured by a group of devices.

3. RELATED WORK Research in cognitive radio has increased rapidly [4, 17] over the years, and it is being projected as one of the leading enabling technologies for wireless networks of the future [9]. As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users and a variety of systems have been proposed in the literature to support such sharing [4, 17]. Detecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. This has been shown by Sahai et al. [16] to be extremely difficult even if the modulation scheme is known. Sophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16]. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16]. More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to the need of a fixed AP infrastructure, such a centralized approach is clearly not scalable. In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation. Cognitive radios organize into clusters and coordination occurs within clusters.
The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters. Although an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds additional delay, but also requires many of the secondary users to be static or quasi-static. In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments. 4. BACKGROUND This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network. 4.1 FM Aggregation Aggregation is the process where nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network and this process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes can tag the aggregate value they transmit with information about which nodes have contributed to it. However, this approach is not scalable. Order and Duplicate Insensitive (ODI) techniques have been proposed in the literature [10, 15]. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7]. Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first head is seen. The node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases. These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2, which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. The actual value of the count aggregate is then computed using the following formula, $AGG_{FM} = 2^{j-1}/0.77351$, where j represents the bit position of the least significant zero in the aggregate bit vector [7]. Although such aggregates are very compact in nature, requiring only O(logN) state space (where N is the number of nodes), they may not be very accurate as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. This decreases the error to within $O(1/\sqrt{m})$, where m is the number of such bit vectors. Queries other than count can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2). Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next.
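The following short Python sketch illustrates the FM counting procedure just described: coin-toss experiments, bitwise-OR merging, and the $2^{j-1}/0.77351$ estimate. It is a minimal illustration rather than the GUESS implementation, and the function names are ours.

```python
# Minimal sketch of Flajolet-Martin (FM) counting as described above.
import random

def fm_vector(bits=32):
    """One coin-toss experiment: set bit i, where i is the number of
    tosses of an unbiased coin until the first head."""
    i = 1
    while random.random() < 0.5 and i < bits:
        i += 1
    return 1 << (i - 1)

def merge(v1, v2):
    """ODI merge: bitwise OR, so ordering and duplicates do not matter."""
    return v1 | v2

def estimate(v, bits=32):
    """AGG_FM = 2^(j-1) / 0.77351, where j is the (1-indexed) position
    of the least significant zero bit in the aggregate vector."""
    j = 1
    while j <= bits and (v >> (j - 1)) & 1:
        j += 1
    return (2 ** (j - 1)) / 0.77351

# COUNT over 1,000 simulated nodes: each node contributes one
# experiment, and the vectors are OR-ed together as they spread.
agg = 0
for _ in range(1000):
    agg = merge(agg, fm_vector())
print(estimate(agg))   # approximates 1000, to within a power of two
```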
4.2 Gossip Protocols Gossip-based protocols operate in discrete time-steps; a time-step is the required amount of time for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence (i.e., to reach the state in which all nodes have the most up-to-date view of the network), on the order of O(log N) [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments. Two types of gossip protocols are: • Uniform Gossip: In uniform gossip, at each time-step, each node chooses a random neighbor and sends its data to it. This process repeats for O(log(N)) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence, with low network overhead [10]. • Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as designated nodes. In each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent time-step (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide similar convergence bounds as uniform gossip in problems of similar context [8, 12]. 5. INCREMENTAL PROTOCOLS 5.1 Incremental FM Aggregates One limitation of FM aggregation is that it does not support updates. Due to the probabilistic nature of FM, once bit vectors have been ORed together, information cannot simply be removed from them, as each node's contribution has not been recorded. We propose the use of delete vectors, an extension of FM to support updates. We maintain a separate aggregate delete vector whose value is subtracted from the original aggregate vector's value to obtain the resulting value as follows:

$AGG_{INC} = (2^{a-1}/0.77351) - (2^{b-1}/0.77351) \quad (1)$

Here, a and b represent the bit positions of the least significant zero in the original and delete bit vectors respectively. Suppose we wish to compute the average signal level detected in a particular frequency. To compute this, we compute the SUM of all signal level measurements and divide that by the COUNT of the number of measurements. A SUM aggregate is computed similar to COUNT (explained in Section 4.1), except that each node performs s coin toss experiments, where s is the locally measured signal level. Figure 2 illustrates the sequence by which the average signal energy is computed in a particular band using FM aggregation. Now suppose that the measured signal at a node changes from s to s′. The vectors are updated as follows. • s′ > s: We simply perform (s′ − s) more coin toss experiments and bitwise OR the result with the original bit vector. • s′ < s: We increase the value of the delete vector by performing (s − s′) coin toss experiments and bitwise OR-ing the result with the current delete vector. Using delete vectors, we can now support updates to the measured signal level.
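As a minimal illustration of the delete-vector mechanism (again a sketch under our own naming, not the paper's code), a node's SUM aggregate and its delete vector can be maintained as follows, reusing the `fm_vector`, `merge`, and `estimate` helpers from the previous sketch:

```python
class IncrementalSum:
    """SUM aggregate with a delete vector, per equation (1):
    AGG_INC = 2^(a-1)/0.77351 - 2^(b-1)/0.77351."""

    def __init__(self):
        self.original = 0   # OR of coin-toss vectors for added signal
        self.delete = 0     # OR of coin-toss vectors for removed signal
        self.local = 0      # this node's last reported signal level

    def update(self, s_new):
        """Apply a local signal change from self.local to s_new."""
        diff = s_new - self.local
        for _ in range(abs(diff)):
            if diff > 0:    # signal rose: add experiments to original
                self.original = merge(self.original, fm_vector())
            else:           # signal fell: grow the delete vector instead
                self.delete = merge(self.delete, fm_vector())
        self.local = s_new

    def value(self):
        """Estimated sum after subtracting the delete vector's value."""
        return estimate(self.original) - estimate(self.delete)
```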
With the original implementation of FM, the aggregate would need to be discarded and a new one recomputed every time an update occurred. Thus, delete vectors provide a low overhead alternative for applications whose data changes incrementally, such as signal level measurements in a coordinated spectrum sensing environment. Next we discuss how these aggregates can be communicated between devices using incremental routing protocols. 5.2 Incremental Routing Protocol We use the following incremental variants of the routing protocols presented in Section 4.2 to support incremental updates to previously computed aggregates.

Figure 3: State diagram each device passes through as updates proceed in the system

• Incremental Gossip Protocol (IGP): When an update occurs, the updated node initiates the gossiping procedure. Other nodes only begin gossiping once they receive the update. Therefore, nodes receiving the update become active and continue communicating with their neighbors until the update protocol terminates, after O(log(N)) time steps. • Incremental Random Walk Protocol (IRWP): When an update (or updates) occur in the system, instead of starting random walks at k random nodes in the network, all k random walks are initiated from the updated node(s). The rest of the protocol proceeds in the same fashion as the standard random walk protocol. The allocation of walks to updates is discussed in more detail in [3], where the authors show that the number of walks has an almost negligible impact on network overhead. 6. PROTOCOL DETAILS Using incremental routing protocols to disseminate incremental FM aggregates is a natural fit for the problem of coordinated spectrum sensing. Here we outline the implementation of such techniques for a cognitive radio network. We continue with the example from Section 5.1, where we wish to perform coordination between a group of wireless devices to compute the average signal level in a particular frequency band. Using either incremental random walk or incremental gossip, each device proceeds through three phases, in order to determine the global average signal level for a particular frequency band. Figure 3 shows a state diagram of these phases. Susceptible: Each device starts in the susceptible state and becomes infectious only when its locally measured signal level changes, or if it receives an update message from a neighboring device. If a local change is observed, the device updates either the original or delete bit vector, as described in Section 5.1, and moves into the infectious state. If it receives an update message, it ORs the received original and delete bit vectors with its local bit vectors and moves into the infectious state. Note, because signal level measurements may change sporadically over time, a smoothing function, such as an exponentially weighted moving average, should be applied to these measurements. Infectious: Once a device is infectious it continues to send its up-to-date bit vectors, using either incremental random walk or incremental gossip, to neighboring nodes. Due to FM's order and duplicate insensitive (ODI) properties, simultaneously occurring updates are handled seamlessly by the protocol.
Update messages contain a time stamp indicating when the update was generated, and each device maintains a local time stamp of when it received the most recent update. Using this information, a device moves into the recovered state once enough time has passed for the most recent update to have converged. As discussed in Section 4.2, this happens after O(log(N)) time steps. Recovered: A recovered device ceases to propagate any update information. At this point, it performs clean-up and prepares for the next infection by entering the susceptible state. Once all devices have entered the recovered state, the system will have converged, and with high probability, all devices will have the up-to-date average signal level. Due to the cumulative nature of FM, even if all devices have not converged, the next update will include all previous updates. Nevertheless, the probability that gossip fails to converge is small, and has been shown to be O(1/N) [10]. For coordinated spectrum sensing, non-incremental routing protocols can be implemented in a similar fashion. Random walk would operate by having devices periodically drop the aggregate and re-run the protocol. Each device would perform a coin toss (biased on the number of walks) to determine whether or not it is a designated node. This is different from the protocol discussed above, where only updated nodes initiate random walks. Similar techniques can be used to implement standard gossip.

Figure 4: Execution times of Incremental Protocols: (a) Incremental Gossip and Uniform Gossip on Clique; (b) Incremental Random Walk and Random Walk on Clique; (c) Incremental Random Walk and Random Walk on Power-Law Random Graph.

Figure 5: Network overhead of Incremental Protocols: (a) Incremental Gossip and Uniform Gossip on Clique; (b) Incremental Random Walk and Random Walk on Clique; (c) Incremental Random Walk and Random Walk on Power-Law Random Graph.

7. EVALUATION We now provide a preliminary evaluation of GUESS in simulation. A more detailed evaluation of this approach can be found in [3]. Here we focus on how incremental extensions to gossip protocols can lead to further improvements over standard gossiping techniques, for the problem of coordinated spectrum sensing. Simulation Setup: We implemented a custom simulator in C++. We study the improvements of our incremental gossip protocols over standard gossiping in two dimensions: execution time and network overhead. We use two topologies to represent device connectivity: a clique, to eliminate the effects of the underlying topology on protocol performance, and a BRITE-generated [13] power-law random graph (PLRG), to illustrate how our results extend to more realistic scenarios.
We simulate a large deployment of 1,000 devices to analyze protocol scalability. In our simulations, we compute the average signal level in a particular band by disseminating FM bit vectors. In each run of the simulation, we induce a change in the measured signal at one or more devices. A run ends when the new average signal level has converged in the network. For each data point, we ran 100 simulations, and 95% confidence intervals (error bars) are shown. Simulation Parameters: Each transmission involves sending 70 bits of information to a neighboring node. To compute the AVERAGE aggregate, four bit vectors need to be transmitted: the original SUM vector, the SUM delete vector, the original COUNT vector, and the COUNT delete vector. Non-incremental protocols do not transmit the delete vectors. Each transmission also includes a time stamp of when the update was generated. We assume nodes communicate on a common control channel at 2 Mbps. Therefore, one time-step of protocol execution corresponds to the time required for 1,000 nodes to sequentially send 70 bits at 2 Mbps (i.e., 1,000 × 70 bits / 2 Mbps = 35 ms). Sequential use of the control channel is a worst case for our protocols; in practice, multiple control channels could be used in parallel to reduce execution time. We also assume nodes are loosely time synchronized, the implications of which are discussed further in [3]. Finally, in order to isolate the effect of protocol operation on performance, we do not model the complexities of the wireless channel in our simulations. Incremental Protocols Reduce Execution Time: Figure 4(a) compares the performance of incremental gossip (IGP) with uniform gossip on a clique topology. We observe that both protocols have almost identical execution times. This is expected, as IGP operates in a similar fashion to uniform gossip, taking O(log(N)) time-steps to converge. Figure 4(b) compares the execution times of incremental random walk (IRWP) and standard random walk on a clique. IRWP reduces execution time by a factor of 2.7 for a small number of measured signal changes. Although random walk and IRWP both use k random walks (in our simulations k = number of nodes), IRWP initiates walks only from updated nodes (as explained in Section 5.2), resulting in faster information convergence. These improvements carry over to a PLRG topology as well (as shown in Figure 4(c)), where IRWP is 1.33 times faster than random walk. Incremental Protocols Reduce Network Overhead: Figure 5(a) shows the ratio of data transmitted using uniform gossip relative to incremental gossip on a clique. For a small number of signal changes, incremental gossip incurs 2.4 times less overhead than uniform gossip. This is because in the early steps of protocol execution, only devices which detect signal changes communicate. As more signal changes are introduced into the system, gossip and incremental gossip incur approximately the same overhead. Similarly, incremental random walk (IRWP) incurs much less overhead than standard random walk. Figure 5(b) shows a 2.7-fold reduction in overhead for small numbers of signal changes on a clique. Although each protocol uses the same number of random walks, IRWP uses fewer network resources than random walk because it takes less time to converge. This improvement also holds true on more complex PLRG topologies (as shown in Figure 5(c)), where we observe a 33% reduction in network overhead.
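To make the mechanics of the IRWP variant evaluated above concrete, the following sketch (our own simplification; the merging of FM vectors at each hop is elided, and all names are hypothetical) shows how all walk tokens are seeded only at the updated node(s):

```python
# Simplified sketch of Incremental Random Walk (IRWP) dissemination.
import random

def incremental_random_walk(graph, updated_nodes, k, steps):
    """graph maps each node to its neighbor list. All k walk tokens
    start at the updated node(s); at each time-step every token is
    handed to a random neighbor, which would merge the carried
    aggregate (omitted) and becomes designated for the next step."""
    designated = [random.choice(updated_nodes) for _ in range(k)]
    for _ in range(steps):
        designated = [random.choice(graph[node]) for node in designated]
    return designated

# Tiny example on a 4-node clique where node 0 observed a signal change.
clique4 = {n: [m for m in range(4) if m != n] for n in range(4)}
print(incremental_random_walk(clique4, updated_nodes=[0], k=4, steps=3))
```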
From these results it is clear that incremental techniques yield significant improvements over standard approaches to gossip, even on complex topologies. Because spectrum utilization is characterized by incremental changes to usage, incremental protocols are ideally suited to solve this problem in an efficient and cost effective manner. 8. DISCUSSION AND FUTURE WORK We have only just scratched the surface in addressing the problem of coordinated spectrum sensing using incremental gossiping. Next, we outline some open areas of research. Spatial Decay: Devices performing coordinated sensing are primarily interested in the spectrum usage of their local neighborhood. Therefore, we recommend the use of spatially decaying aggregates [6], which limits the impact of an update on more distant nodes. Spatially decaying aggregates work by successively reducing (by means of a decay function) the value of the update as it propagates further from its origin. One challenge with this approach is that propagation distance cannot be determined ahead of time and, more importantly, exhibits spatio-temporal variations. Therefore, finding the optimal decay function is non-trivial, and an interesting subject of future work. Significance Threshold: RF spectrum bands continually experience small-scale changes which may not necessarily be significant. Deciding if a change is significant can be done using a significance threshold β, below which any observed change is not propagated by the node. Choosing an appropriate operating value for β is application dependent, and explored further in [3]. Weighted Readings: Although we argued that most devices will likely be equipped with low-cost sensing equipment, there may be situations where some special infrastructure nodes have better sensing abilities than others. Weighting their measurements more heavily could be used to maintain a higher degree of accuracy. Determining how to assign such weights is an open area of research. Implementation Specifics: Finally, implementing gossip for coordinated spectrum sensing is also open. If implemented at the MAC layer, it may be feasible to piggy-back gossip messages over existing management frames (e.g. networking advertisement messages). As well, we also require the use of a control channel to disseminate sensing information. There are a variety of alternatives for implementing such a channel, some of which are outlined in [4]. The trade-offs of different approaches to implementing GUESS are a subject of future work. 9. CONCLUSION Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low overhead approach to perform efficient coordination between cognitive radios. The fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient in-network aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce the communication time by up to a factor of 2.7 and reduce network overhead by up to a factor of 2.4. Our preliminary simulation results showcase the benefits of this approach and we also outline a set of open problems that make this a new and exciting area of research.
10. REFERENCES
[1] Unlicensed Operation in the TV Broadcast Bands and Additional Spectrum for Unlicensed Devices Below 900 MHz in the 3 GHz Band, May 2004. Notice of Proposed Rule-Making 04-186, Federal Communications Commission.
[2] In-Stat: Covering the Full Spectrum of Digital Communications Market Research, from Vendor to End-user, December 2005. http://www.in-stat.com/catalog/scatalogue.asp?id=28.
[3] N. Ahmed, D. Hadaller, and S. Keshav. Incremental Maintenance of Global Aggregates. Technical Report CS-2006-19, University of Waterloo, ON, Canada, 2006.
[4] R. W. Brodersen, A. Wolisz, D. Cabric, S. M. Mishra, and D. Willkomm. CORVUS: A Cognitive Radio Approach for Usage of Virtual Unlicensed Spectrum. Technical report, July 2004.
[5] D. Cabric, S. M. Mishra, and R. W. Brodersen. Implementation Issues in Spectrum Sensing for Cognitive Radios. In Asilomar Conference, 2004.
[6] E. Cohen and H. Kaplan. Spatially-Decaying Aggregation Over a Network: Model and Algorithms. In Proceedings of SIGMOD 2004, pages 707-718, New York, NY, USA, 2004. ACM Press.
[7] P. Flajolet and G. N. Martin. Probabilistic Counting Algorithms for Data Base Applications. J. Comput. Syst. Sci., 31(2):182-209, 1985.
[8] C. Gkantsidis, M. Mihail, and A. Saberi. Random Walks in Peer-to-Peer Networks. In Proceedings of INFOCOM 2004, pages 1229-1240, 2004.
[9] E. Griffith. Previewing Intel's Cognitive Radio Chip, June 2005. http://www.internetnews.com/wireless/article.php/3513721.
[10] D. Kempe, A. Dobra, and J. Gehrke. Gossip-Based Computation of Aggregate Information. In FOCS 2003, page 482, Washington, DC, USA, 2003. IEEE Computer Society.
[11] X. Liu and S. Shankar. Sensing-based Opportunistic Channel Access. In ACM Mobile Networks and Applications (MONET) Journal, March 2005.
[12] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker. Search and Replication in Unstructured Peer-to-Peer Networks. In Proceedings of ICS, 2002.
[13] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: An Approach to Universal Topology Generation. In Proceedings of the MASCOTS Conference, Aug. 2001.
[14] S. M. Mishra, A. Sahai, and R. W. Brodersen. Cooperative Sensing among Cognitive Radios. In ICC 2006, June 2006.
[15] S. Nath, P. B. Gibbons, S. Seshan, and Z. R. Anderson. Synopsis Diffusion for Robust Aggregation in Sensor Networks. In Proceedings of SenSys 2004, pages 250-262, 2004.
[16] A. Sahai, N. Hoven, S. M. Mishra, and R. Tandra. Fundamental Tradeoffs in Robust Spectrum Sensing for Opportunistic Frequency Reuse. Technical Report, UC Berkeley, 2006.
[17] J. Zhao, H. Zheng, and G.-H. Yang. Distributed Coordination in Dynamic Spectrum Allocation Networks. In Proceedings of DySPAN 2005, Baltimore (MD), Nov. 2005.
GUESS: Gossiping Updates for Efficient Spectrum Sensing ABSTRACT Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches. 1. INTRODUCTION There has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum. However, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation. Currently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users. New spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users Figure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user. that can use it when the primary user is absent. The second type of allocation scheme is termed opportunistic spectrum sharing. The FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1]. As a result, a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment. Under the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers). This can be done by sensing the environment to detect the presence of primary users. However, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1. Here, coordination between secondary users is the only way for shadowed users to detect the primary. In general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5]. To realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation. In this paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations. We defer the problem of spectrum allocation to future work. 
Spectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes; (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques. We advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet. This is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1). Moreover, compared to cen tralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming). Coordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales. Data size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned. This data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment. This paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks. Our approach is motivated by the following key observations: 1. Low-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost and low duty-cycle devices and thus cannot perform complex RF signal processing (e.g. matched filtering). Many are typically equipped with simple energy detectors that gather only approximate information. 2. Approximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data. Thus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing. 3. RF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently. More over, utilization of a specific RF band affects only that band and not the entire spectrum. Therefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol to update the information for that band alone, leaving in place information already collected for other bands. This allows rapid detection of change while saving the overhead of exchanging unnecessary information. Based on these observations, GUESS makes the following contributions: 1. A novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing. These algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities. 2. An application of in-network aggregation for dissemination of spectrum summaries. We argue that approximate summaries are adequate for performing accurate radio parameter tuning. 3. An extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries. Compared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources. The rest of the paper is organized as follows. 
Section 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing. Section 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping. Sections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing. Section 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work. 2. MOTIVATION To estimate the scale of the problem, In-Stat predicts that the number of WiFi-enabled devices sold annually alone will grow to 430 million by 2009 [2]. Therefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other. As a result, distributed spectrum sensing and allocation will become both important and fundamental. Coordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing. Cabric et al. [5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate. However, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged. Limited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination. This implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface. Therefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function. Thus, any such coordination must incur minimal network overhead. Short Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion. This is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements. Here, fading and multi-path interference heavily impact sensing abilities. Signal level can drop to a deep null with just a λ/4 movement in receiver position (3.7 cm at 2 GHz), where λ is the wavelength [14]. Coordination which does not support rapid dissemination of information will not be able to account for such RF variations. Large Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan. Suppose we wish to compute the average signal energy in each of 100 discretized frequency bands, where each signal can have up to 128 discrete energy levels. Exchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth. Contrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation. 
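The arithmetic behind these bandwidth figures can be reproduced in a few lines of Python. Note that matching the paper's numbers requires treating one megabit as 2^20 bits, and the c = 2 gossip constant below anticipates the illustrative choice made in the next paragraph; both are our inferences rather than details stated explicitly in the text.

```python
import math

DEVICES = 50          # secondary users in the group
BITS_PER_TX = 700     # 100 channels x 7 bits (128 energy levels)
MBIT = 2 ** 20        # the paper's figures match binary megabits (assumption)

# Naive full exchange: every device transmits in each of 50 time-steps.
naive_bits = 50 * DEVICES * BITS_PER_TX
print(f"full exchange: {naive_bits / MBIT:.2f} Mbps")   # ~1.67 Mbps

# Gossip: c * log2(N) time-steps suffice for convergence (c = 2 here).
steps = math.ceil(2 * math.log2(DEVICES))               # 12 time-steps
gossip_bits = steps * DEVICES * BITS_PER_TX
print(f"gossip + FM:   {gossip_bits / MBIT:.2f} Mbps")  # ~0.40 Mbps
```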
By applying gossip and FM aggregation, aggregate bandwidth requirements drop to (c · log N time-steps × 50 devices × 700 bits per transmission) = 0.40 Mbps, since 12 time-steps are needed to propagate the data (with c = 2, for illustrative purposes; convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment). This is explained further in Section 4. Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7, these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.
Figure 2: Using FM aggregation to compute average signal level measured by a group of devices.
3. RELATED WORK Research in cognitive radio has increased rapidly [4, 17] over the years, and it is being projected as one of the leading enabling technologies for wireless networks of the future [9]. As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users, and a variety of systems have been proposed in the literature to support such sharing [4, 17]. Detecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. This has been shown by Sahai et al. [16] to be extremely difficult even if the modulation scheme is known. Sophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16]. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16]. More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to the need for a fixed AP infrastructure, such a centralized approach is clearly not scalable. In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation. Cognitive radios organize into clusters and coordination occurs within clusters. The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters. Although an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds additional delay, but also requires many of the secondary users to be static or quasi-static. In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments. 4. BACKGROUND This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network. 
4.1 FM Aggregation Aggregation is the process whereby nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network, and this process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes can tag the aggregate value they transmit with information about which nodes have contributed to it. However, this approach is not scalable. Order and Duplicate Insensitive (ODI) techniques have been proposed in the literature [10, 15]. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7]. Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first "head" is seen. The node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases. These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2, which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. The actual value of the count aggregate is then computed using the formula AGG_FM = 2^(j−1) / 0.77351, where j represents the bit position of the least significant zero in the aggregate bit vector [7]. Although such aggregates are very compact in nature, requiring only O(log N) state space (where N is the number of nodes), they may not be very accurate as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. This decreases the error to within O(1/√m), where m is the number of such bit vectors. Queries other than count can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2). Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next. 4.2 Gossip Protocols Gossip-based protocols operate in discrete time-steps; a time-step is the amount of time required for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence, on the order of O(log N) [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments. Two types of gossip protocols are: • Uniform Gossip: In uniform gossip, at each time-step, each node chooses a random neighbor and sends its data to it. This process repeats for O(log(N)) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence, with low network overhead [10]. • Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as designated nodes. In each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent time-step (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide similar convergence bounds as uniform gossip in problems of similar context [8, 12].
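To make Sections 4.1 and 4.2 concrete, here is a minimal, self-contained sketch of FM COUNT vectors being merged by bitwise OR and spread by uniform gossip on a clique. The helper names (fm_insert, fm_estimate), the 32-bit vector width, and the use of a single bit vector per node (hence a coarse, power-of-2 estimate) are our own illustrative choices, not the authors' implementation.

```python
import math
import random

VECTOR_BITS = 32  # plenty of headroom for the counts simulated here

def fm_insert(vector: int) -> int:
    """One coin-toss experiment: set bit i-1, where i is the number of
    tosses of an unbiased coin needed to see the first head."""
    i = 1
    while random.random() < 0.5 and i < VECTOR_BITS:
        i += 1
    return vector | (1 << (i - 1))

def fm_estimate(vector: int) -> float:
    """AGG_FM = 2^(j-1) / 0.77351, where j is the position of the
    least significant zero in the aggregate bit vector."""
    j = 1
    while vector & (1 << (j - 1)):
        j += 1
    return 2 ** (j - 1) / 0.77351

# COUNT query: every node contributes one coin-toss experiment.
N = 100
vectors = [fm_insert(0) for _ in range(N)]

# Uniform gossip on a clique: each time-step, every node pushes its
# vector to one random peer, which ORs it in; O(log N) steps suffice.
for _ in range(math.ceil(2 * math.log2(N))):
    for node in range(N):
        peer = random.randrange(N)
        vectors[peer] |= vectors[node]

print(f"true count: {N}, FM estimate: {fm_estimate(vectors[0]):.0f}")
```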
5. INCREMENTAL PROTOCOLS 5.1 Incremental FM Aggregates One limitation of FM aggregation is that it does not support updates. Due to the probabilistic nature of FM, once bit vectors have been ORed together, information cannot simply be removed from them, as each node's contribution has not been recorded. We propose the use of delete vectors, an extension of FM to support updates. We maintain a separate aggregate delete vector whose value is subtracted from the original aggregate vector's value to obtain the resulting value: AGG = 2^(a−1)/0.77351 − 2^(b−1)/0.77351, where a and b represent the bit positions of the least significant zero in the original and delete bit vectors respectively. Suppose we wish to compute the average signal level detected in a particular frequency. To compute this, we compute the SUM of all signal level measurements and divide it by the COUNT of the number of measurements. A SUM aggregate is computed similarly to COUNT (explained in Section 4.1), except that each node performs s coin toss experiments, where s is the locally measured signal level. Figure 2 illustrates the sequence by which the average signal energy is computed in a particular band using FM aggregation. Now suppose that the measured signal at a node changes from s to s'. The vectors are updated as follows. • s' > s: We simply perform (s' − s) more coin toss experiments and bitwise OR the result with the original bit vector. • s' < s: We increase the value of the delete vector by performing (s − s') coin toss experiments and bitwise OR the result with the current delete vector. Using delete vectors, we can now support updates to the measured signal level. With the original implementation of FM, the aggregate would need to be discarded and a new one recomputed every time an update occurred. Thus, delete vectors provide a low overhead alternative for applications whose data changes incrementally, such as signal level measurements in a coordinated spectrum sensing environment.
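Below is a sketch of the delete-vector bookkeeping just described, with the FM helpers repeated so it runs standalone. The concrete signal levels (an initial reading of 40 dropping to 25) are made-up values, and a real deployment would keep m bit vectors per aggregate rather than the single, coarse vector used here.

```python
import random

def fm_insert(vector: int) -> int:
    """One FM coin-toss experiment: set bit i-1, i = tosses to first head."""
    i = 1
    while random.random() < 0.5:
        i += 1
    return vector | (1 << (i - 1))

def fm_estimate(vector: int) -> float:
    """AGG_FM = 2^(j-1) / 0.77351, j = least significant zero position."""
    j = 1
    while vector & (1 << (j - 1)):
        j += 1
    return 2 ** (j - 1) / 0.77351

# A SUM aggregate plus its delete vector, per Section 5.1.
original, delete = 0, 0

for _ in range(40):                 # initial reading: s = 40
    original = fm_insert(original)

def apply_change(old: int, new: int) -> None:
    """Incrementally fold a change from old to new into the vectors."""
    global original, delete
    if new > old:                   # s' > s: (s' - s) extra experiments
        for _ in range(new - old):
            original = fm_insert(original)
    elif new < old:                 # s' < s: grow the delete vector
        for _ in range(old - new):
            delete = fm_insert(delete)

apply_change(40, 25)                # reading drops to s' = 25

# Resulting value = original estimate minus delete estimate; coarse with
# a single vector per aggregate, as FM rounds to powers of two.
print(f"estimated SUM: {fm_estimate(original) - fm_estimate(delete):.1f}")
```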
Next we discuss how these aggregates can be communicated between devices using incremental routing protocols. 5.2 Incremental Routing Protocol We use the following incremental variants of the routing protocols presented in Section 4.2 to support incremental updates to previously computed aggregates. • Incremental Gossip Protocol (IGP): When an update occurs, the updated node initiates the gossiping procedure. Other nodes only begin gossiping once they receive the update. Therefore, nodes receiving the update become active and continue communicating with their neighbors until the update protocol terminates, after O(log(N)) time steps. • Incremental Random Walk Protocol (IRWP): When an update (or updates) occur in the system, instead of starting random walks at k random nodes in the network, all k random walks are initiated from the updated node(s). The rest of the protocol proceeds in the same fashion as the standard random walk protocol. The allocation of walks to updates is discussed in more detail in [3], where the authors show that the number of walks has an almost negligible impact on network overhead. 6. PROTOCOL DETAILS Using incremental routing protocols to disseminate incremental FM aggregates is a natural fit for the problem of coordinated spectrum sensing. Here we outline the implementation of such techniques for a cognitive radio network. We continue with the example from Section 5.1, where we wish to perform coordination between a group of wireless devices to compute the average signal level in a particular frequency band. Using either incremental random walk or incremental gossip, each device proceeds through three phases in order to determine the global average signal level for a particular frequency band. Figure 3 shows a state diagram of these phases.
Figure 3: State diagram each device passes through as updates proceed in the system
Susceptible: Each device starts in the susceptible state and becomes infectious only when its locally measured signal level changes, or if it receives an update message from a neighboring device. If a local change is observed, the device updates either the original or delete bit vector, as described in Section 5.1, and moves into the infectious state. If it receives an update message, it ORs the received original and delete bit vectors with its local bit vectors and moves into the infectious state. Note that because signal level measurements may change sporadically over time, a smoothing function, such as an exponentially weighted moving average, should be applied to these measurements. Infectious: Once a device is infectious, it continues to send its up-to-date bit vectors, using either incremental random walk or incremental gossip, to neighboring nodes. Due to FM's order and duplicate insensitive (ODI) properties, simultaneously occurring updates are handled seamlessly by the protocol. Update messages contain a time stamp indicating when the update was generated, and each device maintains a local time stamp of when it received the most recent update. Using this information, a device moves into the recovered state once enough time has passed for the most recent update to have converged. As discussed in Section 4.2, this happens after O(log(N)) time steps. Recovered: A recovered device ceases to propagate any update information. At this point, it performs clean-up and prepares for the next infection by entering the susceptible state. Once all devices have entered the recovered state, the system will have converged, and with high probability, all devices will have the up-to-date average signal level. Due to the cumulative nature of FM, even if all devices have not converged, the next update will include all previous updates. Nevertheless, the probability that gossip fails to converge is small, and has been shown to be O(1/N) [10]. For coordinated spectrum sensing, non-incremental routing protocols can be implemented in a similar fashion. Random walk would operate by having devices periodically drop the aggregate and re-run the protocol. Each device would perform a coin toss (biased on the number of walks) to determine whether or not it is a designated node. This is different from the protocol discussed above, where only updated nodes initiate random walks. Similar techniques can be used to implement standard gossip. 
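The susceptible/infectious/recovered cycle of Figure 3 can be summarized as a small state machine. This is our paraphrase of the protocol, and the EWMA weight ALPHA and the convergence timer based on log2(N) time-steps are assumed parameters (the paper leaves both unspecified).

```python
import math

ALPHA = 0.25   # EWMA smoothing weight for sporadic readings (assumed value)
N = 1000       # network size, used only to size the convergence timer

class Device:
    """Susceptible -> infectious -> recovered cycle from Figure 3."""

    def __init__(self) -> None:
        self.state = "susceptible"
        self.smoothed = 0.0
        self.last_update_ts = 0.0

    def on_local_reading(self, level: float, now: float) -> None:
        # Smooth sporadic measurements before treating them as a change.
        self.smoothed = ALPHA * level + (1 - ALPHA) * self.smoothed
        # ...update the original or delete bit vector here (Section 5.1)...
        self.last_update_ts = now
        self.state = "infectious"

    def on_update_message(self, ts: float) -> None:
        # ...OR received original/delete vectors into the local ones...
        self.last_update_ts = max(self.last_update_ts, ts)
        self.state = "infectious"

    def tick(self, now: float, step_seconds: float) -> None:
        # Recover once O(log N) time-steps have elapsed since the most
        # recent update, then clean up and rejoin the susceptible pool.
        if self.state == "infectious":
            if now - self.last_update_ts > math.log2(N) * step_seconds:
                self.state = "recovered"
        elif self.state == "recovered":
            self.state = "susceptible"
```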
7. EVALUATION We now provide a preliminary evaluation of GUESS in simulation. A more detailed evaluation of this approach can be found in [3]. Here we focus on how incremental extensions to gossip protocols can lead to further improvements over standard gossiping techniques for the problem of coordinated spectrum sensing. Simulation Setup: We implemented a custom simulator in C++. We study the improvements of our incremental gossip protocols over standard gossiping in two dimensions: execution time and network overhead. We use two topologies to represent device connectivity: a clique, to eliminate the effects of the underlying topology on protocol performance, and a BRITE-generated [13] power-law random graph (PLRG), to illustrate how our results extend to more realistic scenarios. We simulate a large deployment of 1,000 devices to analyze protocol scalability. In our simulations, we compute the average signal level in a particular band by disseminating FM bit vectors. In each run of the simulation, we induce a change in the measured signal at one or more devices. A run ends when the new average signal level has converged in the network. For each data point, we ran 100 simulations, and 95% confidence intervals (error bars) are shown. Simulation Parameters: Each transmission involves sending 70 bits of information to a neighboring node. To compute the AVERAGE aggregate, four bit vectors need to be transmitted: the original SUM vector, the SUM delete vector, the original COUNT vector, and the COUNT delete vector. Non-incremental protocols do not transmit the delete vectors. Each transmission also includes a time stamp of when the update was generated. We assume nodes communicate on a common control channel at 2 Mbps. Therefore, one time-step of protocol execution corresponds to the time required for 1,000 nodes to sequentially send 70 bits at 2 Mbps. Sequential use of the control channel is a worst case for our protocols; in practice, multiple control channels could be used in parallel to reduce execution time. We also assume nodes are loosely time synchronized, the implications of which are discussed further in [3]. Finally, in order to isolate the effect of protocol operation on performance, we do not model the complexities of the wireless channel in our simulations. 
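For concreteness, the stated parameters pin down the simulated time-step. The conversion below is our reading of the worst-case sequential control channel, and the c = 2 round constant is carried over from the earlier illustrative example rather than taken from the paper's simulator.

```python
import math

NODES = 1000
BITS_PER_TX = 70          # four FM vectors plus a time stamp
CHANNEL_BPS = 2_000_000   # shared 2 Mbps control channel

# Worst case: all nodes send sequentially within one time-step.
step_seconds = NODES * BITS_PER_TX / CHANNEL_BPS      # 0.035 s
rounds = math.ceil(2 * math.log2(NODES))              # c = 2 (assumption)
print(f"one time-step: {step_seconds * 1000:.0f} ms")
print(f"~{rounds} steps to converge: {rounds * step_seconds:.2f} s")
```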
Figure 4: Execution times of Incremental Protocols
Figure 5: Network overhead of Incremental Protocols
Incremental Protocols Reduce Execution Time: Figure 4 (a) compares the performance of incremental gossip (IGP) with uniform gossip on a clique topology. We observe that both protocols have almost identical execution times. This is expected, as IGP operates in a similar fashion to uniform gossip, taking O(log(N)) time-steps to converge. Figure 4 (b) compares the execution times of incremental random walk (IRWP) and standard random walk on a clique. IRWP reduces execution time by a factor of 2.7 for a small number of measured signal changes. Although random walk and IRWP both use k random walks (in our simulations, k = number of nodes), IRWP initiates walks only from updated nodes (as explained in Section 5.2), resulting in faster information convergence. These improvements carry over to a PLRG topology as well (as shown in Figure 4 (c)), where IRWP is 1.33 times faster than random walk. Incremental Protocols Reduce Network Overhead: Figure 5 (a) shows the ratio of data transmitted using uniform gossip relative to incremental gossip on a clique. For a small number of signal changes, incremental gossip incurs 2.4 times less overhead than uniform gossip. This is because in the early steps of protocol execution, only devices which detect signal changes communicate. As more signal changes are introduced into the system, gossip and incremental gossip incur approximately the same overhead. Similarly, incremental random walk (IRWP) incurs much less overhead than standard random walk. Figure 5 (b) shows a 2.7-fold reduction in overhead for small numbers of signal changes on a clique. Although each protocol uses the same number of random walks, IRWP uses fewer network resources than random walk because it takes less time to converge. This improvement also holds true on more complex PLRG topologies (as shown in Figure 5 (c)), where we observe a 33% reduction in network overhead. From these results it is clear that incremental techniques yield significant improvements over standard approaches to gossip, even on complex topologies. Because spectrum utilization is characterized by incremental changes to usage, incremental protocols are ideally suited to solve this problem in an efficient and cost-effective manner. 8. DISCUSSION AND FUTURE WORK We have only just scratched the surface in addressing the problem of coordinated spectrum sensing using incremental gossiping. Next, we outline some open areas of research. Spatial Decay: Devices performing coordinated sensing are primarily interested in the spectrum usage of their local neighborhood. Therefore, we recommend the use of spatially decaying aggregates [6], which limit the impact of an update on more distant nodes. Spatially decaying aggregates work by successively reducing (by means of a decay function) the value of the update as it propagates further from its origin (see the sketch at the end of this section). One challenge with this approach is that propagation distance cannot be determined ahead of time and, more importantly, exhibits spatio-temporal variations. Therefore, finding the optimal decay function is non-trivial, and an interesting subject of future work. Significance Threshold: RF spectrum bands continually experience small-scale changes which may not necessarily be significant. Deciding if a change is significant can be done using a significance threshold β, below which any observed change is not propagated by the node. Choosing an appropriate operating value for β is application dependent, and explored further in [3]. Weighted Readings: Although we argued that most devices will likely be equipped with low-cost sensing equipment, there may be situations where some special infrastructure nodes have better sensing abilities than others. Weighting their measurements more heavily could be used to maintain a higher degree of accuracy. Determining how to assign such weights is an open area of research. Implementation Specifics: Finally, implementing gossip for coordinated spectrum sensing is also open. If implemented at the MAC layer, it may be feasible to piggy-back gossip messages onto existing management frames (e.g. networking advertisement messages). We also require the use of a control channel to disseminate sensing information. There are a variety of alternatives for implementing such a channel, some of which are outlined in [4]. The trade-offs of different approaches to implementing GUESS are a subject of future work. 
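As a rough illustration of the spatial-decay idea referenced above, the following sketch attenuates an update's contribution by hop count. The exponential form and the half-distance constant are placeholders, since the paper explicitly leaves the choice of decay function open.

```python
HALF_DISTANCE = 3.0   # hops at which an update's weight halves (placeholder)

def decay(hops: int) -> float:
    """Exponential decay: weight 1.0 at the origin, halving every
    HALF_DISTANCE hops, so distant nodes are barely perturbed."""
    return 0.5 ** (hops / HALF_DISTANCE)

def decayed_level(level: float, hops: int) -> float:
    # Scale a measured signal level before folding it into the local
    # aggregate, limiting the impact of far-away updates.
    return level * decay(hops)

for hops in (0, 3, 6, 12):
    print(f"{hops:>2} hops: weight {decay(hops):.3f}, "
          f"level 40 contributes {decayed_level(40, hops):.1f}")
```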
9. CONCLUSION Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low overhead approach to performing efficient coordination between cognitive radios. The fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient in-network aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce the communication time by up to a factor of 2.7 and reduce network overhead by up to a factor of 2.4. Our preliminary simulation results showcase the benefits of this approach, and we also outline a set of open problems that make this a new and exciting area of research.
GUESS: Gossiping Updates for Efficient Spectrum Sensing ABSTRACT Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches. 1. INTRODUCTION There has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum. However, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation. Currently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users. New spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users Figure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user. that can use it when the primary user is absent. The second type of allocation scheme is termed opportunistic spectrum sharing. The FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1]. As a result, a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment. Under the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers). This can be done by sensing the environment to detect the presence of primary users. However, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1. Here, coordination between secondary users is the only way for shadowed users to detect the primary. In general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5]. To realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation. In this paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations. We defer the problem of spectrum allocation to future work. 
Spectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes; (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques. We advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet. This is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1). Moreover, compared to cen tralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming). Coordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales. Data size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned. This data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment. This paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks. Our approach is motivated by the following key observations: 1. Low-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost and low duty-cycle devices and thus cannot perform complex RF signal processing (e.g. matched filtering). Many are typically equipped with simple energy detectors that gather only approximate information. 2. Approximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data. Thus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing. 3. RF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently. More over, utilization of a specific RF band affects only that band and not the entire spectrum. Therefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol to update the information for that band alone, leaving in place information already collected for other bands. This allows rapid detection of change while saving the overhead of exchanging unnecessary information. Based on these observations, GUESS makes the following contributions: 1. A novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing. These algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities. 2. An application of in-network aggregation for dissemination of spectrum summaries. We argue that approximate summaries are adequate for performing accurate radio parameter tuning. 3. An extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries. Compared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources. The rest of the paper is organized as follows. 
Section 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing. Section 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping. Sections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing. Section 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work. 2. MOTIVATION To estimate the scale of the problem, In-stat predicts that the number of WiFi-enabled devices sold annually alone will grow to 430 million by 2009 [2]. Therefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other. As a result, distributed spectrum sensing and allocation would become both important and fundamental. Coordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing. Cabric et al. [5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate. However, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged. Limited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination. This implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface. Therefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function. Thus, any such coordination must incur minimal network overhead. Short Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion. This is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements. Here, fading and multi-path interference heavily impact sensing abilities. Signal level can drop to a deep null with just a λ / 4 movement in receiver position (3.7 cm at 2 GHz), where λ is the wavelength [14]. Coordination which does not support rapid dissemination of information will not be able to account for such RF variations. Large Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan. Suppose we wish to compute the average signal energy in each of 100 discretized frequency bands, and each signal can have up to 128 discrete energy levels. Exchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth. Contrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation. 
By applying gossip and FM aggregation, aggregate bandwidth requirements drop to (c · logN time-steps × 50 devices × 700 bits per transmission) = 0.40 Mbps, since 12 time-steps are needed to propagate the data (with c = 2, for illustrative purpoes'). This is explained further in Section 4. Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7,' Convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment. Figure 2: Using FM aggregation to compute average signal level measured by a group of devices. these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above. 3. RELATED WORK Research in cognitive radio has increased rapidly [4, 17] over the years, and it is being projected as one of the leading enabling technologies for wireless networks of the future [9]. As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users and a variety of systems have been proposed in the literature to support such sharing [4, 17]. Detecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. This has been shown by Sahai et al. [16] to be extremely difficult even if the modulation scheme is known. Sophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16]. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16]. More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to the need of a fixed AP infrastructure, such a centralized approach is clearly not scalable. In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation. Cognitive radios organize into clusters and coordination occurs within clusters. The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters. Although an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds additional delay, but also requires many of the secondary users to be static or quasi-static. In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments. 4. BACKGROUND This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network. 
4.1 FM Aggregation Aggregation is the process where nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network and this process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes can tag the aggregate value they transmit with information about which nodes have contributed to it. However, this approach is not scalable. Order and Duplicate Insensitive (ODI) techniques have been proposed in the literature [10, 15]. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7]. Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first "head" is seen. The node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases. These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2 which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. The actual value of the count aggregate is then computed using the following formula, AGGF M = 2j − 1/0 .77351, where j represents the bit position of the least significant zero in the aggregate bit vector [7]. Although such aggregates are very compact in nature, requiring only O (logN) state space (where N is the number of nodes), they may not be very accurate as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. This decreases the error to within O (1 / √ m), where m is the number of such bit vectors. Queries other than count can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2). Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next. 4.2 Gossip Protocols Gossip-based protocols operate in discrete time-steps; a time-step is the required amount of time for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence2, on the order of O (log N) [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments. 
Two types of gossip protocols are: • Uniform Gossip: In uniform gossip, at each time step, each node chooses a random neighbor and sends its data to it. This process repeats for O (log (N)) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence, with low network overhead [10]. • Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as designated nodes. In each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent timestep (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide similar convergence bounds as uniform gossip in problems of similar context [8, 12]. 5. INCREMENTAL PROTOCOLS 5.1 Incremental FM Aggregates 5.2 Incremental Routing Protocol 6. PROTOCOL DETAILS 7. EVALUATION 9. CONCLUSION Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low overhead approach to perform efficient coordination between cognitive radios. The fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient innetwork aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce the communication time by up to a factor of 2.7 and reduce network overhead by up to a factor of 2.4. Our preliminary simulation results showcase the benefits of this approach and we also outline a set of open problems that make this a new and exciting area of research.
GUESS: Gossiping Updates for Efficient Spectrum Sensing ABSTRACT Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches. 1. INTRODUCTION There has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum. Currently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users. New spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users Figure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user. that can use it when the primary user is absent. The second type of allocation scheme is termed opportunistic spectrum sharing. Under the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers). This can be done by sensing the environment to detect the presence of primary users. However, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1. Here, coordination between secondary users is the only way for shadowed users to detect the primary. In general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5]. To realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation. In this paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations. We defer the problem of spectrum allocation to future work. Spectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes; (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques. We advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet. This is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1). Moreover, compared to cen tralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming). 
Coordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales. Data size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned. This data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment. This paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks. Our approach is motivated by the following key observations: 1. Many are typically equipped with simple energy detectors that gather only approximate information. 2. Approximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data. Thus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing. 3. RF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently. More over, utilization of a specific RF band affects only that band and not the entire spectrum. This allows rapid detection of change while saving the overhead of exchanging unnecessary information. Based on these observations, GUESS makes the following contributions: 1. A novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing. These algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities. 2. An application of in-network aggregation for dissemination of spectrum summaries. We argue that approximate summaries are adequate for performing accurate radio parameter tuning. 3. An extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries. Compared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources. The rest of the paper is organized as follows. Section 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing. Section 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping. Sections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing. Section 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work. 2. MOTIVATION Therefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other. As a result, distributed spectrum sensing and allocation would become both important and fundamental. Coordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing. Limited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination. This implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface. 
Therefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function. Thus, any such coordination must incur minimal network overhead. Short Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion. This is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements. Here, fading and multi-path interference heavily impact sensing abilities. Coordination which does not support rapid dissemination of information will not be able to account for such RF variations. Large Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan. Exchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth. Contrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation. This is explained further in Section 4. Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7,' Convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment. Figure 2: Using FM aggregation to compute average signal level measured by a group of devices. these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above. 3. RELATED WORK As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users and a variety of systems have been proposed in the literature to support such sharing [4, 17]. Detecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16]. More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to the need of a fixed AP infrastructure, such a centralized approach is clearly not scalable. In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation. Cognitive radios organize into clusters and coordination occurs within clusters. The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters. 
In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments. 4. BACKGROUND This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network. 4.1 FM Aggregation Aggregation is the process where nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network and this process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes can tag the aggregate value they transmit with information about which nodes have contributed to it. However, this approach is not scalable. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7]. Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first "head" is seen. The node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases. These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2 which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next. 4.2 Gossip Protocols Gossip-based protocols operate in discrete time-steps; a time-step is the required amount of time for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence2, on the order of O (log N) [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments. Two types of gossip protocols are: • Uniform Gossip: In uniform gossip, at each time step, each node chooses a random neighbor and sends its data to it. This process repeats for O (log (N)) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence, with low network overhead [10]. • Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. 
4.2 Gossip Protocols
Gossip-based protocols operate in discrete time-steps; a time-step is the amount of time required for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence, on the order of O(log N) [10], where N is the number of nodes (or radios); convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment. These protocols can therefore easily scale to very dense environments. Two types of gossip protocols are:

• Uniform Gossip: In uniform gossip, at each time-step, each node chooses a random neighbor and sends its data to it. This process repeats for O(log N) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence, with low network overhead [10].

• Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as designated nodes. In each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent time-step (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide convergence bounds similar to uniform gossip in problems of similar context [8, 12].

9. CONCLUSION
Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low-overhead approach to perform efficient coordination between cognitive radios. Our preliminary simulation results showcase the benefits of this approach, and we also outline a set of open problems that make this a new and exciting area of research.
J-56
Robust Solutions for Combinatorial Auctions
Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature. In reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so. Given a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. We have called this the Bid-taker's Exposure Problem. When faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue. In this paper, we propose an approach to addressing the Bid-taker's Exposure Problem. Firstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution. A weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes. Secondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker. We then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.
[ "robust", "combinatori auction", "bid", "enforc commit", "bid withdraw", "exposur problem", "weight super solut", "weight super solut", "constraint program", "constraint program", "mutual bid bond", "bid-taker's exposur problem", "set partit problem", "winner determin problem", "mandatori mutual bid bond" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "M" ]
Robust Solutions for Combinatorial Auctions∗

Alan Holland, Cork Constraint Computation Centre, Department of Computer Science, University College Cork, Ireland. a.holland@4c.ucc.ie
Barry O'Sullivan, Cork Constraint Computation Centre, Department of Computer Science, University College Cork, Ireland. b.osullivan@4c.ucc.ie

ABSTRACT
Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature. In reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so. Given a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. We have called this the Bid-taker's Exposure Problem. When faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue. In this paper, we propose an approach to addressing the Bid-taker's Exposure Problem. Firstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution. A weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes. Secondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker. We then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.

Categories and Subject Descriptors
J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search.

[∗ This work has received support from Science Foundation Ireland under grant number 00/PI.1/C075. The authors wish to thank Brahim Hnich and the anonymous reviewers for their helpful comments.]

General Terms
Algorithms, Economics, Reliability.

1. INTRODUCTION
A combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ. Such auctions are gaining in popularity and there is a proliferation in their usage across various industries such as telecoms, B2B procurement and transportation [11, 19]. Revenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness. In terms of combinatorial auctions, a robust solution is one that can withstand bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue. A brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn.
In such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders. These bidders may associate a higher value with these items if they were combined with items already awarded to others, hence the bid-taker is left in an undesirable local optimum in which a form of backtracking is required to reallocate the items in a manner that results in sufficient revenue. We have called this the Bid-taker's Exposure Problem, which bears similarities to the Exposure Problem faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items. However, reallocating items may be regarded as disruptive to a solution in many real-life scenarios. Consider a scenario where procurement for a business is conducted using a CA. It would be highly undesirable to retract contracts from a group of suppliers because of the failure of a third party. A robust solution that is tolerant of such breaks is preferable.

Robustness may be regarded as a preventative measure protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability. We assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders, from which the likelihood of an incomplete transaction may be inferred. Repair solutions are required for bids that are seen as brittle (i.e. likely to break). Repairs may also be required for sets of bids deemed brittle. We propose the use of the Weighted Super Solutions (WSS) framework [13] for constraint programming, which is ideal for establishing such robust solutions. As we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable.

This paper is organized as follows. Section 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers. This motivates the use of robust solutions, and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions. We then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, which may be seen as a form of leveled commitment contract [26, 27]. Section 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results. Section 6 discusses possible extensions and questions raised by our research that deserve future work. Finally, in Section 7 a number of concluding remarks are made.

2. COMBINATORIAL AUCTIONS
Before presenting the technical details of our solution to the Bid-taker's Exposure Problem, we shall present a brief survey of combinatorial auctions and existing techniques for handling bid withdrawal. Combinatorial auctions involve a single bid-taker allocating multiple distinguishable items amongst a group of bidders. The bid-taker has a set of m items for sale, M = {1, 2, ..., m}, and bidders submit a set of bids, B = {B1, B2, ..., Bn}. A bid is a tuple Bj = ⟨Sj, pj⟩, where Sj ⊆ M is a subset of the items for sale and pj ≥ 0 is a price. The WDP for a CA is to label all bids as either winning or losing so as to maximize the revenue from winning bids without allocating any item to more than one bid.
The following is the integer programming formulation for the WDP:

$$\max \sum_{j=1}^{n} p_j x_j \quad \text{s.t.} \quad \sum_{j \mid i \in S_j} x_j \le 1 \ \ \forall i \in \{1, \dots, m\}, \qquad x_j \in \{0, 1\}.$$

This problem is NP-complete [23] and inapproximable [25], and is otherwise known as the Set Packing Problem. The above problem formulation assumes the notion of free disposal. This means that the optimal solution need not necessarily sell all of the items. If the auction rules stipulate that all items must be sold, the problem becomes a Set Partition Problem [5]. The WDP has been extensively studied in recent years. The fastest search algorithms that find optimal solutions (e.g. CABOB [25]) can, in practice, solve very large problems involving thousands of bids very quickly.

2.1 The Problem of Bid Withdrawal
We assume an auction protocol with a three-stage process involving the submission of bids, winner determination, and finally a transaction phase. We are interested in bid withdrawals that occur between the announcement of winning bids and the end of the transaction phase. All bids are valid until the transaction is complete, so we anticipate an expedient transaction process. [Footnote 1: In some instances the transaction period may be so lengthy that consideration of non-winning bids as still being valid may not be fair. Breaks that occur during a lengthy transaction phase are more difficult to remedy and may require a subsequent auction. For example, if the item is a service contract for a given period of time and the break occurs after partial fulfilment of this contract, the other bidders' valuations for the item may have decreased in a non-linear fashion.]

An example of a winning bid withdrawal occurred in an FCC spectrum auction [32]. Withdrawals, or breaks, may occur for various reasons. Bid withdrawal may be instigated by the bid-taker when Quality of Service agreements are broken or payment deadlines are not met. We refer to bid withdrawal by the bid-taker as item withdrawal in this paper to distinguish between the actions of a bidder and the bid-taker. Harstad and Rothkopf [8] outlined several possibilities for breaks in single-item auctions that include:
1. an erroneous initial valuation/bid;
2. unexpected events outside the winning bidder's control;
3. a desire to have the second-best bid honored;
4. information obtained, or events that occurred, after the auction but before the transaction that reduce the value of an item;
5. the revelation of competing bidders' valuations implying reduced profitability, a problem known as the Winner's Curse.

Kastner et al. [15] examined how to handle perturbations given a solution whilst minimizing necessary changes to that solution. These perturbations may include bid withdrawals, changes to the valuation/items of a bid, or the submission of a new bid. They looked at the problem of finding incremental solutions to restructure a supply chain whose formation is determined using combinatorial auctions [30]. Following a perturbation in the optimal solution they proceed to impose involuntary item withdrawals from winning bidders. They formulated an incremental integer linear program (ILP) that sought to maximize the valuation of the repair solution whilst preserving the previous solution as much as possible.

2.2 Being Proactive against Bid Withdrawal
When a bid is withdrawn there may be constraints on how the solution can be repaired. If the bid-taker were freely able to revoke the awarding of items to other bidders, then the solution could be repaired easily by reassigning all the items according to the optimal solution without the withdrawn bid.
Alternatively, the bidder who reneged upon a bid may have all his other bids disqualified, and the items could be reassigned based on the optimal solution without that bidder present. However, the bid-taker is often unable to freely reassign the items already awarded to other bidders. When items cannot be withdrawn from winning bidders, following the failure of another bidder to honor his bid, repair solutions are restricted to the set of bids whose items only include those in the bid(s) that were reneged upon. We are free to award items to any of the previously unsuccessful bids when finding a repair solution.

When faced with uncertainty over the reliability of bidders, a possible approach is to maximize expected revenue. This approach does not make allowances for risk-averse bid-takers who may view a small possibility of very low revenue as unacceptable. Consider the example in Table 1, and the optimal expected revenue in the situation where a single bid may be withdrawn. There are three submitted bids for items A and B, the third being a combination bid for the pair of items at a value of 190. The optimal solution has a value of 200, with the first and second bids as winners.

Table 1: Example Combinatorial Auction.
                      Items
  Bids        A       B      AB     Withdrawal prob
  x1        100       0       0          0.1
  x2          0     100       0          0.1
  x3          0       0     190          0.1

When we consider the probabilities of failure, in the fourth column, the problem of which solution to choose becomes more difficult. Computing the expected revenue for the solution with the first and second bids winning the items, denoted ⟨1, 1, 0⟩, gives: (200 × 0.9 × 0.9) + (2 × 100 × 0.9 × 0.1) + (190 × 0.1 × 0.1) = 181.90. If a single bid is withdrawn there is a probability of 0.18 of a revenue of 100, given the fact that we cannot withdraw an item from the other winning bidder. The expected revenue for ⟨0, 0, 1⟩ is: (190 × 0.9) + (200 × 0.1) = 191.00. We can therefore surmise that the second solution is preferable to the first based on expected revenue.

Determining the maximum expected revenue in the presence of such uncertainty becomes computationally infeasible, however, as the number of brittle bids grows: a WDP needs to be solved for all possible combinations of bids that may fail. The possible loss in revenue for breaks is also not tightly bounded using this approach, so a large loss may be possible for a small number of breaks. Consider the previous example where the bid amount for x3 becomes 175. The expected revenue of ⟨1, 1, 0⟩ (181.75) becomes greater than that of ⟨0, 0, 1⟩ (177.50). Some bid-takers may prefer the latter solution because its revenue is never less than 175, whereas the former solution returns a revenue of only 100 with probability 0.18. A risk-averse bid-taker may not tolerate such a possibility, preferring to sacrifice revenue for reduced risk. If we modify our repair search so that a solution of at least a given revenue is guaranteed, the search for a repair solution becomes a satisfiability test rather than an optimization problem.

The approaches described above are in contrast to that which we propose in the next section. Our approach can be seen as preventative in that we find an initial allocation of items to bidders which is robust to bid withdrawal. Possible losses in revenue are bounded by a fixed percentage of the true optimal allocation. Perturbations to the original solution are also limited so as to minimize disruption. We regard this as the ideal approach for real-world combinatorial auctions.
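The expected-revenue arithmetic above is easy to verify mechanically. The brute-force Python sketch below (ours, purely illustrative) enumerates withdrawal outcomes for the Table 1 auction and scores each outcome with the best repair available under the rule that items cannot be taken back from surviving winners; it reproduces 181.90 for ⟨1, 1, 0⟩ and 191.00 for ⟨0, 0, 1⟩.

    from itertools import combinations, product

    bids = {"x1": ({"A"}, 100, 0.1),        # (items, price, withdrawal prob)
            "x2": ({"B"}, 100, 0.1),
            "x3": ({"A", "B"}, 190, 0.1)}

    def feasible(sol):
        items = [i for n in sol for i in bids[n][0]]
        return len(items) == len(set(items))  # no item sold twice

    def revenue(sol):
        return sum(bids[n][1] for n in sol)

    def best_repair(survivors, withdrawn):
        # Best feasible extension of the surviving winners, never reusing a
        # withdrawn bid and never revoking a surviving winner's items.
        pool = [n for n in bids if n not in withdrawn and n not in survivors]
        options = (set(c) | survivors for r in range(len(pool) + 1)
                   for c in combinations(pool, r))
        return max(revenue(s) for s in options if feasible(s))

    def expected_revenue(winners):
        total = 0.0
        for outcome in product([False, True], repeat=len(winners)):
            p = 1.0
            withdrawn, survivors = set(), set()
            for bid, gone in zip(winners, outcome):
                p *= bids[bid][2] if gone else 1 - bids[bid][2]
                (withdrawn if gone else survivors).add(bid)
            total += p * best_repair(survivors, withdrawn)
        return total

    print(expected_revenue(["x1", "x2"]))  # 181.90...
    print(expected_revenue(["x3"]))        # 191.00...

Exactly this enumeration is what becomes infeasible at scale: with k brittle winning bids, 2^k winner-determination problems must be solved.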
DEFINITION 1 (ROBUST SOLUTION FOR A CA). A robust solution for a combinatorial auction is one where any subset of successful bids whose probability of withdrawal is greater than or equal to α can be repaired by reassigning items, at a cost of at most β, to other previously losing bids to form a repair solution.

Constraints on acceptable revenue, e.g. being a minimum percentage of the optimum, are defined in the problem model and are thus satisfied by all solutions. The maximum cost of repair, β, may be a fixed value that may be thought of as a fund for compensating winning bidders whose items are withdrawn from them when creating a repair solution. Alternatively, β may be a function of the bids that were withdrawn. Section 4 will give an example of such a mechanism. In the following section we describe an ideal constraint-based framework for the establishment of such robust solutions.

3. FINDING ROBUST SOLUTIONS
In constraint programming [4] (CP), a constraint satisfaction problem (CSP) is modeled as a set of n variables X = {x1, ..., xn}, a set of domains D = {D(x1), ..., D(xn)}, where D(xi) is the set of finite possible values for variable xi, and a set C = {C1, ..., Cm} of constraints, each restricting the assignments of some subset of the variables in X. Constraint satisfaction involves finding values for each of the problem variables such that all constraints are satisfied. Its main advantages are its declarative nature and its flexibility in tackling problems with arbitrary side constraints. Constraint optimization seeks to find a solution to a CSP that optimizes some objective function. A common technique for solving constraint optimization problems is to use branch-and-bound techniques that avoid exploring sub-trees that are known not to contain a better solution than the best found so far. An initial bound can be determined by finding a solution that satisfies all constraints in C or by using some heuristic methods.

A classical super solution (SS) is a solution to a CSP in which, if a small number of variables lose their values, repair solutions are guaranteed with only a few changes, thus providing solution robustness [9, 10]. It is a generalization of both fault tolerance in CP [31] and supermodels in propositional satisfiability (SAT) [7]. An (a, b)-super solution is one in which, if at most a variables lose their values, a repair solution can be found by changing at most b other variables [10]. Super solutions for combinatorial auctions minimize the number of bids whose status needs to be changed when forming a repair solution [12]. Only a particular set of variables in the solution may be subject to change, and these are said to be members of the break-set. For each combination of brittle assignments in the break-set, a repair-set is required that comprises the set of variables whose values must change to provide another solution. The cardinality of the repair-set is used to measure the cost of repair. In reality, changing some variable assignments in a repair solution incurs a lower cost than others, thereby motivating the use of a different metric for determining the legality of repair-sets.

The Weighted Super Solution (WSS) framework [13] considers the cost of repair required, rather than simply the number of assignments modified, to form an alternative solution. For CAs this may be a measure of the compensation penalties paid to winning bidders to break existing agreements.
Robust solutions are particularly desirable for applications where unreliability is a problem and potential breakages may incur severe penalties. Weighted super solutions offer a means of expressing which variables are easily re-assigned and which incur a heavy cost [13]. Hebrard et al. [9] describe how some variables may fail (such as machines in a job-shop problem) and others may not. A WSS generalizes this approach so that there is a probability of failure associated with each assignment, and sets of variables whose assignments have probabilities of failure greater than or equal to a threshold value, α, require repair solutions.

A WSS measures the cost of repairing, or reassigning, other variables using inertia as a metric. Inertia is a measure of a variable's aversion to change and depends on its current assignment, future assignment and the breakage variable(s). It may be desirable to reassign items to different bidders in order to find a repair solution of satisfactory revenue. Compensation may have to be paid to bidders who lose items during the formation of a repair solution. The inertia of a bid reflects the cost of changing its state. For winning bids this may reflect the compensation penalty the bid-taker must pay to break the agreement (if such breaches are permitted), whereas for previously losing bids this is a free operation. The total amount of compensation payable to bidders may depend upon other factors, such as the cause of the break. There is a limit to how much these overall repair costs should be, and this is given by the value β. This value may not be known in advance and may depend upon the break. Therefore, β may be viewed as the fund used to compensate winning bidders for the unilateral withdrawal of their bids by the bid-taker. In summary, an (α, β)-WSS allows any set of variables whose probability of breaking is greater than or equal to α to be repaired with changes to the original robust solution at a cost of at most β.

Algorithm 1: WSS(int level, double α, double β): Boolean
begin
    if level > number of variables then return true
    choose unassigned variable x
    foreach value v in the domain of x do
        assign x := v
        if problem is consistent then
            foreach combination of brittle assignments, A do
                if ¬reparable(A, β) then return false
            if WSS(level + 1) then return true
        unassign x
    return false
end

The depth-first search for a WSS (see the pseudo-code description in Algorithm 1) maintains arc-consistency [24] at each node of the tree. As search progresses, the reparability of each previous assignment is verified at each node by extending a partial repair solution to the same depth as the current partial solution. This may be thought of as maintaining concurrent search trees for repairs. A repair solution is provided for every possible set of break variables, A. The WSS algorithm attempts to extend the current partial assignment by choosing a variable and assigning it a value. Backtracking may then occur for one of two reasons: we cannot extend the assignment to satisfy the given constraints, or the current partial assignment cannot be associated with a repair solution whose cost of repair is less than β should a break occur. The procedure reparable searches for partial repair solutions using backtracking and attempts to extend the last repair found, just as in (1, b)-super solutions [9]; the differences are that a repair is provided for a set of breakage variables rather than a single variable, and that the cost of repair is considered. A summation operator is used to determine the overall cost of repair. If a fixed bound upon the size of any potential break-set can be formed, the WSS algorithm is NP-complete. For a more detailed description of the WSS search algorithm, the reader is referred to [13], since a complete description of the algorithm is beyond the scope of this paper.
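Algorithm 1 interleaves the repair search with the main backtrack search; as a cross-check, the same (α, β)-robustness condition can be stated in a few lines of brute-force Python. The sketch below is ours and purely expository: it enumerates exponentially many candidate sets rather than pruning as Algorithm 1 does, it assumes one bid per bidder, and it is parameterized on the three-bid instance of Table 1 that Example 1 next walks through in detail.

    from itertools import combinations

    # name -> (items, price, withdrawal probability, decommitment penalty)
    bids = {"x1": ({"A"}, 100, 0.1, 5),
            "x2": ({"B"}, 100, 0.1, 5),
            "x3": ({"A", "B"}, 190, 0.1, 9.5)}

    def solutions(min_revenue):
        # All feasible bid sets meeting the revenue threshold.
        names = list(bids)
        for r in range(len(names) + 1):
            for combo in combinations(names, r):
                items = [i for n in combo for i in bids[n][0]]
                if len(items) == len(set(items)) and \
                   sum(bids[n][1] for n in combo) >= min_revenue:
                    yield set(combo)

    def reparable(winners, broken, beta, min_revenue, item_withdrawal):
        # Inertia: withdrawing an item from a surviving winner costs its
        # penalty; promoting a previously losing bid is free.
        survivors = winners - {broken}
        for repair in solutions(min_revenue):
            if broken in repair:
                continue
            cost = sum(bids[n][3] for n in survivors - repair)
            if (item_withdrawal or survivors <= repair) and cost <= beta:
                return True
        return False

    def robust_solutions(alpha, beta, min_revenue, item_withdrawal=False):
        for winners in solutions(min_revenue):
            brittle = [n for n in winners if bids[n][2] >= alpha]
            if all(reparable(winners, b, beta, min_revenue, item_withdrawal)
                   for b in brittle):
                yield winners

    print(list(robust_solutions(alpha=0.1, beta=0, min_revenue=190)))
    # [{'x3'}]: without item withdrawal, only <0,0,1> is robust (Example 1)
    print(list(robust_solutions(alpha=0.1, beta=5, min_revenue=190,
                                item_withdrawal=True)))
    # also yields {'x1','x2'}, anticipating the mutual bid bonds of Section 4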
EXAMPLE 1. We shall step through the example given in Table 1 when searching for a WSS. Each bid is represented by a single variable with domain values 0 and 1, the former representing bid-failure and the latter bid-success. The probability of failure of a variable is 0.1 when it is assigned to 1, and 0.0 otherwise. The problem is initially solved using an ILP solver such as lp_solve [3] or CPLEX, and the optimal revenue is found to be 200. A fixed percentage of this revenue can be used as a threshold value for a robust solution and its repairs. The bid-taker wishes to have a robust solution so that if a single winning bid is withdrawn, a repair solution can be formed without withdrawing items from any other winning bidder. This example may be seen as searching for a (0.1, 0)-weighted super solution; β is 0 because no funds are available to compensate the withdrawal of items from winning bidders. The bid-taker is willing to compromise on revenue, but only by 5%, say, of the optimal value.

Bids 1 and 3 cannot both succeed, since they both require item A, so a constraint is added precluding the assignment in which both variables take the value 1. Similarly, bids 2 and 3 cannot both win, so another constraint is added between these two variables. Therefore, in this example the set of CSP variables is V = {x1, x2, x3}, whose domains are all {0, 1}. The constraints are $x_1 + x_3 \le 1$, $x_2 + x_3 \le 1$ and $\sum_{x_i \in V} a_i x_i \ge 190$, where ai reflects the relevant bid amounts for the respective bid variables. In order to find a robust solution of optimal revenue we seek to maximize the sum of these amounts, $\max \sum_{x_i \in V} a_i x_i$.

When all variables are set to 0 (see Figure 1(a), branch 3), this is not a solution because the minimum revenue of 190 has not been met, so we try assigning bid3 to 1 (branch 4). This is a valid solution, but this variable is brittle because there is a 10% chance that this bid may be withdrawn (see Table 1). Therefore we need to determine if a repair can be formed should it break. The search for a repair begins at the first node, see Figure 1(b). Notice that value 1 has been removed from bid3 because this search tree is simulating the withdrawal of that bid. When bid1 is set to 0 (branch 4.1), the maximum revenue solution in the remaining subtree has a revenue of only 100, therefore search is discontinued at that node of the tree. Bid1 and bid2 are both assigned to 1 (branches 4.2 and 4.4) and the total cost of both these changes is still 0 because no compensation needs to be paid for bids that change from losing to winning. With bid3 now losing (branch 4.5), this gives a repair solution of 200. Hence ⟨0, 0, 1⟩ is reparable and therefore a WSS.

We continue our search in Figure 1(a), however, because we are seeking a robust solution of optimal revenue. When bid1 is assigned to 1 (branch 6) we seek a partial repair for this variable breaking (branch 5 is not considered since it offers insufficient revenue). The repair search sets bid1 to 0 in a separate search tree (not shown), and control is returned to the search for a WSS. Bid2 is set to 0 (branch 7), but this solution would not produce sufficient revenue, so bid2 is then set to 1 (branch 8).
We then attempt to extend the repair for bid1 (not shown). This fails: the repair for bid1 cannot assign bid2 to 0, since the cost of repairing such an assignment would be ∞, given that the auction rules do not permit the withdrawal of items from winning bids. A repair for bid1 breaking is therefore not possible because items have already been awarded to bid2. A repair solution with bid2 assigned to 1 does not produce sufficient revenue when bid1 is assigned to 0. The inability to withdraw items from winning bids implies that ⟨1, 1, 0⟩ is an irreparable solution when the minimum tolerable revenue is greater than 100. The italicized comments and dashed line in Figure 1(a) illustrate the search path for a WSS if both of these bids were deemed reparable.

[Figure 1: Search tree for a WSS without item withdrawal: (a) search for a WSS; (b) search for a repair for a bid 3 breakage.]

Section 4 introduces an alternative auction model that will allow the bid-taker to receive compensation for breakages and in turn use this payment to compensate other bidders for the withdrawal of items from winning bids. This will enable the reallocation of items and permit the establishment of ⟨1, 1, 0⟩ as a second WSS for this example.

4. MUTUAL BID BONDS: A BACKTRACKING MECHANISM
Some auction solutions are inherently brittle and it may be impossible to find a robust solution. If we can alter the rules of an auction so that the bid-taker can retract items from winning bidders, then the reparability of solutions to such auctions may be improved. In this section we propose an auction model that permits bid and item withdrawal by the bidders and bid-taker, respectively. We propose a model that incorporates mutual bid bonds to enable solution reparability for the bid-taker: a form of insurance against the winner's curse for the bidder whilst also compensating bidders in the case of item withdrawal from winning bids. We propose that such Winner's Curse & Bid-taker's Exposure insurance comprise a fixed percentage, κ, of the bid amount for all bids. Such mutual bid bonds are mandatory for each bid in our model. [Footnote 2: Making the insurance optional may be beneficial in some instances. If a bidder does not agree to the insurance, it may be inferred that he may have accurately determined the valuation for the items and is therefore less likely to fall victim to the winner's curse. The probability of such a bid being withdrawn may be lower, so a repair solution may be deemed unnecessary for this bid. On the other hand, it decreases the reparability of solutions.]

The conditions attached to the bid bonds are that the bid-taker be allowed to annul winning bids (item withdrawal) when repairing breaks elsewhere in the solution. In the interests of fairness, compensation is paid to bidders from whom items are withdrawn and is equivalent to the penalty that would have been imposed on the bidder had he withdrawn the bid. Combinatorial auctions impose a heavy computational burden on the bidder, so it is important that the hedging of risk be a simple and transparent operation for the bidder, so as not to further increase this burden unnecessarily. We also contend that it is imperative that the bidder knows the potential penalty for withdrawal in advance of bid submission. This information is essential for bidders when determining how aggressive they should be in their bidding strategy.

Bid bonds are commonplace in procurement for construction projects.
Usually they are mandatory for all bids, they are a fixed percentage, κ, of the bid amount, and they are unidirectional in that item withdrawal by the bid-taker is not permitted. Mutual bid bonds may be seen as a form of leveled commitment contract in which both parties may break the contract for the same fixed penalty. Such contracts permit unilateral decommitment for prespecified penalties. Sandholm et al. showed that this can increase the expected payoffs of all parties and enables deals that would be impossible under full commitment [26, 28, 29]. In practice a bid bond typically ranges between 5 and 20% of the bid amount [14, 18]. If the decommitment penalties are the same for both parties in all bids, κ does not influence the reparability of a given set of bids; it merely influences the levels of penalties and compensation transacted by agents. Low values of κ incur low bid-withdrawal penalties and simulate a dictatorial bid-taker who does not adequately compensate bidders for item withdrawal. Andersson and Sandholm [1] found that myopic agents reach a higher social welfare quicker if they act selfishly rather than cooperatively when penalties in leveled commitment contracts are low. Increased levels of bid withdrawal are also likely when the penalties are low. High values of κ tend towards full commitment and reduce the advantages of such Winner's Curse & Bid-taker's Exposure insurance. The penalties paid are used to fund a reassignment of items to form a repair solution of sufficient revenue, by compensating previously successful bidders for the withdrawal of items from them.

EXAMPLE 2. Consider the example given in Table 1 once more, where the bids also comprise a mutual bid bond of 5% of the bid amount. If a bid is withdrawn, the bidder forfeits this amount and the bid-taker can then compensate winning bidders whose items are withdrawn when trying to form a repair solution later. The searches for repair solutions for breaks of bid1 and bid2 appear in Figures 2(a) and 2(b), respectively. [Footnote 3: The actual implementation of WSS search checks previous solutions to see if they can repair breaks before searching for a new repair solution. ⟨0, 0, 1⟩ is a solution that has already been found, so the search for a repair in this example is not strictly necessary, but it is described for pedagogical reasons.] When bid1 breaks, a compensation penalty equal to 5 is paid to the bid-taker, which can be used to fund a reassignment of the items. We therefore set β to 5 and this becomes the maximum expenditure allowed to withdraw items from winning bidders; β may also be viewed as the size of the fund available to facilitate backtracking by the bid-taker. When we extend the partial repair for bid1 so that bid2 loses an item (branch 8.1), the overall cost of repair increases to 5, due to this item withdrawal by the bid-taker, and is just within the limit given by β.
[Figure 2: Repair search trees for breaks 1 and 2, κ = 0.05: (a) search for a repair for a bid 1 breakage; (b) search for a repair for a bid 2 breakage.]

In Figure 1(a) the search path follows the dashed line and sets bid3 to 0 (branch 9). The repair solutions for bids 1 and 2 can be extended further by assigning bid3 to 1 (branches 9.2 and 9.4). Therefore, ⟨1, 1, 0⟩ may be considered a robust solution. Recall that previously this was not the case. Using mutual bid bonds thus increases reparability and allows a robust solution of revenue 200, as opposed to the 190 that was previously the case.

5. EXPERIMENTS
We have used the Combinatorial Auction Test Suite (CATS) [16] to generate sample auction data. We generated 100 instances of problems in which there are 20 items for sale and 100-2000 bids that may be dominated in some instances. [Footnote 4: The CATS flags included int prices, with the bid alpha parameter set to 1000.] Such dominated bids can participate in repair solutions although they do not feature in optimal solutions. CATS uses economically motivated bidding patterns to generate auction data in various scenarios. To motivate the research presented in this paper we use sensitivity analysis to examine the brittleness of optimal solutions and hence determine the types of auctions most likely to benefit from a robust solution. We then establish robust solutions for CAs using the WSS framework.

5.1 Sensitivity Analysis for the WDP
We have performed sensitivity analysis of the following four distributions: airport take-off/landing slots (matching), electronic components (arbitrary), property/spectrum-rights (regions) and transportation (paths). These distributions were chosen because they describe a broad array of bidding patterns in different application domains. The method used is as follows. We first determined the optimal solution using lp_solve, a mixed integer linear program solver [3]. We then simulated a single bid withdrawal and re-solved the problem with the other winning bids remaining fixed, i.e. there were no involuntary dropouts. The optimal repair solution was then determined. This process is repeated for all winning bids in the overall optimal solution, thus assuming that all bids are brittle; a sketch of this loop appears at the end of this subsection. Figure 3 shows the average revenue of such repair solutions as a percentage of the optimum. Also shown is the average worst-case scenario over 100 auctions. We also implemented an auction rule that disallows bids from the reneging bidder from participating in a repair. [Footnote 5: We assumed that all bids in a given XOR bid with the same dummy item were from the same bidder.]

Figure 3(a) illustrates how the paths distribution is inherently the most robust distribution, since when any winning bid is withdrawn the solution can be repaired to achieve over 98.5% of the optimal revenue on average for auctions with more than 250 bids. There are some cases, however, when such withdrawals result in solutions whose revenue is significantly lower than optimum. Even in auctions with as many as 2000 bids there are occasions when a single bid withdrawal can result in a drop in revenue of over 5%, although the average worst-case drop in revenue is only 1%. Figure 3(b) shows how the matching distribution is more brittle on average than paths and also has an inferior worst-case revenue on average. This trend continues: the regions-npv (Figure 3(c)) and arbitrary-npv (Figure 3(d)) distributions are more brittle still. These distributions are clearly sensitive to bid withdrawal when no other winning bids in the solution may be involuntarily withdrawn by the bid-taker.
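In sketch form, the sensitivity loop of Section 5.1 looks as follows. This is our Python illustration only: a brute-force enumeration stands in for lp_solve, the function names are ours, and we assume one bid per bidder so that banning the reneging bidder reduces to banning the withdrawn bid.

    from itertools import combinations

    def solve_wdp(bids, fixed=frozenset(), banned=frozenset()):
        # Brute-force WDP: the best feasible set that contains `fixed` and
        # avoids `banned`. CATS-scale instances need a real solver.
        free = [n for n in bids if n not in fixed and n not in banned]
        best, best_rev = set(fixed), sum(bids[n][1] for n in fixed)
        for r in range(1, len(free) + 1):
            for combo in combinations(free, r):
                sol = set(combo) | fixed
                items = [i for n in sol for i in bids[n][0]]
                if len(items) != len(set(items)):
                    continue  # infeasible: an item allocated twice
                if (rev := sum(bids[n][1] for n in sol)) > best_rev:
                    best, best_rev = sol, rev
        return best, best_rev

    def sensitivity(bids):
        # Repair revenue after each single withdrawal, with all other
        # winners' awards left untouched (no involuntary dropouts).
        opt, opt_rev = solve_wdp(bids)
        for w in sorted(opt):
            _, rev = solve_wdp(bids, fixed=frozenset(opt - {w}),
                               banned=frozenset({w}))
            yield w, 100.0 * rev / opt_rev

    bids = {"x1": ({"A"}, 100), "x2": ({"B"}, 100), "x3": ({"A", "B"}, 190)}
    for bid, pct in sensitivity(bids):
        print(bid, "withdrawn -> best repair recovers", round(pct, 1), "% of optimum")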
5.2 Robust Solutions using WSS
In this section we focus upon both the arbitrary-npv and regions-npv distributions, because the sensitivity analysis indicated that these types of auctions produce optimal solutions that tend to be most brittle, and they therefore stand to benefit most from solution robustness. We ignore the auctions with 2000 bids because the sensitivity analysis has indicated that these auctions are inherently robust, with a very low average drop in revenue following a bid withdrawal. They would also be very computationally expensive, given the extra complexity of finding robust solutions. A pure CP approach needs to be augmented with global constraints that incorporate operations research techniques to increase pruning sufficiently so that thousands of bids may be examined. Global constraints exploit special-purpose filtering algorithms to improve performance [21].

There are a number of ways to speed up the search for a weighted super solution in a CA, although this is not the main focus of our current work. Polynomial matching algorithms may be used in auctions whose bid length is short, such as those for airport landing/take-off slots, for example. The integer programming formulation of the WDP stipulates that a bid either loses or wins. If we relax this constraint so that bids can partially win, we obtain the linear relaxation of the problem, which is solvable in polynomial time. At each node of the search tree we can quickly solve the linear relaxation of the remaining problem in the subtree below the current node to establish an upper bound on remaining revenue. If this upper bound plus the revenue in the parent tree is less than the current lower bound on revenue, search at that node can cease. The (continuous) LP relaxation thus provides a vital speed-up in the search for weighted super solutions, which we have exploited in our implementation. The LP formulation is as follows:

$$\max \sum_{j=1}^{n} a_j x_j \quad \text{s.t.} \quad \sum_{j \mid i \in S_j} x_j \le 1 \ \ \forall i \in \{1, \dots, m\}, \qquad x_j \ge 0,\ x_j \in \mathbb{R}.$$

[Figure 3: Sensitivity of bid distributions to single bid withdrawal: average and worst-case repair-solution revenue (% of optimum) against the number of bids for (a) paths, (b) matching, (c) regions-npv and (d) arbitrary-npv.]

Additional techniques, outlined in [25], can aid the scalability of a CP approach, but our main aim in these experiments is to examine the robustness of various auction distributions and consider the tradeoff between robustness and revenue. The WSS solver we have developed is an extension of the super solution solver presented in [9, 10]. This solver is, in turn, based upon the EFC constraint solver [2]. Combinatorial auctions are easily modeled as constraint optimization problems. We have chosen the branch-on-bids formulation because in tests it worked faster than a branch-on-items formulation for the arbitrary-npv and regions-npv distributions.
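As a concrete illustration of the LP bound above (ours; the paper's implementation uses its own solvers, and SciPy is simply a convenient stand-in), the root-node relaxation for the Table 1 instance can be computed as follows:

    import numpy as np
    from scipy.optimize import linprog

    prices = np.array([100.0, 100.0, 190.0])  # bids x1, x2, x3
    A_ub = np.array([[1, 0, 1],   # item A: x1 + x3 <= 1
                     [0, 1, 1]])  # item B: x2 + x3 <= 1
    b_ub = np.ones(2)

    # linprog minimizes, so negate the prices to maximize revenue; the
    # packing constraints make the explicit upper bound of 1 redundant.
    res = linprog(-prices, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
    print("LP upper bound on revenue:", -res.fun)  # 200.0 on this instance

At an interior search node, the same computation is run over only the undecided bids, and the node is pruned when this bound plus the revenue already committed in the parent tree cannot beat the incumbent.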
All variables are binary and our search mechanism uses a reverse lexicographic value ordering heuristic. This complements our dynamic variable ordering heuristic, which selects the most promising unassigned variable as the next one in the search tree. We use the product of the solution of the LP relaxation and the degree of a variable to determine the likelihood of its participation in a robust solution. High values in the LP solution are a strong indication of the variables most likely to form a high-revenue solution, whilst a variable's degree reflects the number of other bids that overlap with it in terms of desired items. Bids for large numbers of items tend to be more robust, which is why we weight our robust-solution search in this manner. We found this heuristic to be slightly more effective than the LP solution alone. As the number of bids in the auction increases, however, there is an increase in the inherent robustness of solutions, so the degree of a variable loses significance as the auction size increases.

5.3 Results
Our experiments simulate three different constraints on repair solutions. The first is that no winning bids are withdrawn by the bid-taker and a repair solution must return a revenue of at least 90% of the optimal overall solution. Secondly, we relaxed the revenue constraint to 85% of optimum. Thirdly, we allowed backtracking by the bid-taker on winning bids using mutual bid bonds, whilst maintaining the revenue constraint at 90% of optimum. Prior to finding a robust solution we solved the WDP optimally using lp_solve [3]. We then set the minimum tolerable revenue for a solution to be 90% (then 85%) of the revenue of this optimal solution. We assumed that all bids were brittle, thus a repair solution is required for every bid in the solution. Initially we assume that no backtracking is permitted on assignments of items to other winning bids given a bid withdrawal elsewhere in the solution.

Table 2 shows the percentage of optimal solutions that are robust under minimum revenue constraints for repair solutions of 90% and 85% of optimal revenue. Relaxing the revenue constraint on repair solutions to 85% of the optimum revenue greatly increases the number of optimal solutions that are robust. We also conducted experiments on the same auctions in which backtracking by the bid-taker is permitted using mutual bid bonds. This significantly improves the reparability of optimal solutions whilst still maintaining repair solutions of 90% of optimum.

Table 2: Optimal Solutions that are Inherently Robust (%).
                                      #Bids
  Min Revenue                100    250    500    1000    2000
  arbitrary-npv
    repair >= 90%             21      5      3      37      93
    repair >= 85%             26     15     40      87     100
    MBB & repair >= 90%       41     35     60      94    >= 93
  regions-npv
    repair >= 90%             30     33     61      91      98
    repair >= 85%             50     71     95     100     100
    MBB & repair >= 90%       60     78     96      99    >= 98

An interesting feature of the arbitrary-npv distribution is that optimal solutions can become more brittle as the number of bids increases. The reason for this is that optimal solutions for larger auctions have more winning bids. Some of the optimal solutions for the smallest auctions, with 100 bids, have only one winning bidder; if this bid is withdrawn it is usually easy to find a new repair solution within 90% of the previous optimal revenue. Also, repair solutions for bids that contain a small number of items may be made difficult by the fact that a reduced number of bids cover only a subset of those items. A mitigating factor is that such bids form a smaller percentage of the revenue of the optimal solution on average. We also implemented a rule stipulating that any losing bids from a withdrawing bidder cannot participate in a repair solution. This acts as a disincentive for strategic withdrawal and was also used previously in the sensitivity analysis.
In some auctions, a robust solution may not exist. Table 3 shows the percentage of auctions that support robust solutions for the arbitrary-npv and regions-npv distributions.

Table 3: Occurrence of Robust Solutions (%).
                                      #Bids
  Min Revenue                100    250    500    1000
  arbitrary-npv
    repair >= 90%             58     39     51      98
    repair >= 85%             86     88     94      99
    MBB & repair >= 90%       78     86     98     100
  regions-npv
    repair >= 90%             61     70     97     100
    repair >= 85%             89     99     99     100
    MBB & repair >= 90%       83     96    100     100

It is clear that finding robust solutions for the former distribution is particularly difficult for auctions with 250 and 500 bids when revenue constraints are 90% of optimum. This difficulty was previously alluded to by the low percentage of optimal solutions that were robust for these auctions. Relaxing the revenue constraint helps increase the percentage of auctions in which robust solutions are achievable to 88% and 94%, respectively. This improves the reparability of all solutions, thereby increasing the average revenue of the optimal robust solution. It is somewhat counterintuitive to expect a reduction in the reparability of auction solutions as the number of bids increases, because there tends to be an increased number of solutions above a revenue threshold in larger auctions. The MBB auction model performs very well, however, and ensures that robust solutions are achievable for such inherently brittle auctions without sacrificing over 10% of optimal revenue to achieve repair solutions.

Figure 4 shows the average revenue of the optimal robust solution as a percentage of the overall optimum. Repair solutions found for a WSS provide a lower bound on possible revenue following a bid withdrawal. Note that in some instances it is possible for a repair solution to have higher revenue than the original solution. When backtracking on winning bids by the bid-taker is disallowed, this can only happen when the repair solution includes two or more bids that were not in the original; otherwise the repair bids would participate in the optimal robust solution in place of the bid that was withdrawn. A WSS guarantees minimum levels of revenue for repair solutions, but this is not to say that repair solutions cannot be improved upon. It is possible to use an incremental algorithm to determine an optimal repair solution following a break, safe in the knowledge that, in advance of any possible bid withdrawal, we can establish a lower bound on the revenue of a repair. Kastner et al. have provided such an incremental ILP formulation [15].

[Figure 4: Revenue of optimal robust solutions (% of optimum) against the number of bids for (a) regions-npv and (b) arbitrary-npv, under repair revenue of at least 90% of optimal, at least 85% of optimal, and MBB with repair revenue of at least 90% of optimal.]

Mutual bid bonds facilitate backtracking by the bid-taker on already assigned items.
This improves the reparability of all possible solutions, thus increasing the revenue of the optimal robust solution on average. Figure 4 shows the increase in revenue of robust solutions in such instances. The revenues of repair solutions are bounded by at least 90% of the optimum in our experiments, thereby allowing a direct comparison with robust solutions already found using the same revenue constraint but without backtracking. It is immediately obvious that such a mechanism can significantly increase revenue whilst still maintaining solution robustness.

Table 4 shows the number of winning bids participating in optimal and optimal robust solutions given the three different constraints on repairing solutions listed at the beginning of this section.

Table 4: Number of winning bids.
                                  #Bids
  Solution              100     250     500     1000     2000
  arbitrary-npv
    Optimal            3.31    5.60    7.17     9.31    10.63
    Repair >= 90%      1.40    2.18    6.10     9.03   (~10.63)
    Repair >= 85%      1.65    3.81    6.78     9.31   (10.63)
    MBB (>= 90%)       2.33    5.49    7.33     9.34   (~10.63)
  regions-npv
    Optimal            4.34    7.05    9.10    10.67    12.76
    Repair >= 90%      3.03    5.76    8.67    10.63   (~12.76)
    Repair >= 85%      3.45    6.75    9.07   (10.67)  (12.76)
    MBB (>= 90%)       3.90    6.86    9.10    10.68   (~12.76)

As the number of bids increases, more of the optimal overall solutions are robust. This leads to a convergence in the number of winning bids. The numbers in brackets are derived from the sensitivity analysis of optimal solutions, which reveals that almost all optimal solutions for auctions of 2000 bids are robust. We can therefore infer that the average number of winning bids in revenue-maximizing robust solutions converges towards that of the optimal overall solutions. A notable side-effect of robust solutions is that fewer bids participate in the solutions. It can be clearly seen from Table 4 that when revenue constraints on repair solutions are tight, there are fewer winning bids in the optimal robust solution on average. This is particularly pronounced for smaller auctions in both distributions. This can yield benefits for the bid-taker, such as the reduced overhead of dealing with fewer suppliers. Although MBBs aid solution reparability, the number of bids in the solutions increases on average. This is to be expected because a greater fraction of these solutions are in fact optimal, as we saw in Table 2.

6. DISCUSSION AND FUTURE WORK
Bidding strategies can become complex in non-incentive-compatible mechanisms where winner determination is no longer necessarily optimal. The perceived reparability of a bid may influence the bid amount, with reparable bids reaching a lower equilibrium point and perceived irreparable bids being more aggressive. Penalty payments for bid withdrawal also create an incentive for more aggressive bidding by providing a form of insurance against the winner's curse [8]. If a winning bidder's revised valuation for a set of items drops by more than the penalty for withdrawal of the bid, then it is in his best interests to forfeit the item(s) and pay the penalty. Should the auction rules state that the bid-taker will refuse to sell the items to any of the remaining bidders in the event of a withdrawal, then insurance against potential losses will stimulate more aggressive bidding. However, in our case we are seeking to repair the solution with the given bids. A side-effect of such a policy is to offset the increased aggressiveness by incentivizing reduced valuations in expectation that another bidder's successful bid is withdrawn.
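The forfeit condition just described reduces to a one-line comparison. The toy Python below is ours (the function name and the flat κ are illustrative; Section 4's mutual bid bonds set the penalty at κ times the bid amount, and we read the valuation drop relative to the committed bid price):

    def withdrawal_pays(bid_amount, revised_valuation, kappa=0.05):
        # Withdrawing forfeits the bond kappa * bid_amount; honoring the bid
        # costs the gap between the committed price and the revised valuation.
        penalty = kappa * bid_amount
        loss_if_honored = bid_amount - revised_valuation
        return loss_if_honored > penalty

    print(withdrawal_pays(100, 92))  # True: an 8-unit loss beats a 5-unit bond
    print(withdrawal_pays(100, 97))  # False: a 3-unit loss is cheaper than the bond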
Harstad and Rothkopf [8] examined the conditions required to ensure an equilibrium position in which bidding was at least as aggressive as if no bid withdrawal were permitted, given this countervailing incentive to under-estimate a valuation. Three major results arose from their study of bid withdrawal in a single-item auction:
1. Equilibrium bidding is more aggressive with withdrawal for sufficiently small probabilities of an award to the second-highest bidder in the event of a bid withdrawal;
2. Equilibrium bidding is more aggressive with withdrawal if the number of bidders is large enough;
3. For many distributions of costs and estimates, equilibrium bidding is more aggressive with withdrawal if the variability of the estimating distribution is sufficiently large.

It is important that mutual bid bonds do not result in depressed bidding in equilibrium. An analysis of the resultant behavior of bidders must incorporate the possibility of a bidder winning an item and having it withdrawn in order for the bid-taker to formulate a repair solution after a break elsewhere. Harstad and Rothkopf have analyzed bidder aggressiveness [8] using a strictly game-theoretic model in which the only reason for bid withdrawal is the winner's curse. They assumed all bidders were risk-neutral, but surmised that it is entirely possible for the bid-taker to collect a risk premium from risk-averse bidders with the offer of such insurance. Combinatorial auctions with mutual bid bonds add an extra incentive to bid aggressively because of the possibility of being compensated for having a winning bid withdrawn by the bid-taker. This is militated against by the increased probability of not having items withdrawn in a repair solution. We leave an in-depth analysis of the sufficient conditions for more aggressive bidding for future work.

Whilst the WSS framework provides ample flexibility and expressiveness, scalability becomes a problem for larger auctions. Although solutions to larger auctions tend to be naturally more robust, some bid-takers in such auctions may require robustness. A possible extension of our work in this paper would be to examine the feasibility of reformulating integer linear programs so that their solutions are robust. Hebrard et al. [10] examined the reformulation of CSPs for finding super solutions. Alternatively, it may be possible to use a top-down approach by looking at the k best solutions sequentially, in terms of revenue, and performing sensitivity analysis upon each solution until a robust one is found.

In procurement settings the principle of free disposal is often discounted and all items must be sold. This reduces the number of potential solutions and thereby reduces the reparability of each solution. The impact of such a constraint on the revenue of robust solutions is also left for future work. There is another interesting direction this work may take, namely robust mechanism design. Porter et al. introduced the notion of fault-tolerant mechanism design, in which agents have private information regarding costs for task completion, but also their probabilities of failure [20]. When the bid-taker has combinatorial valuations for task completions it may be desirable to assign the same task to multiple agents to ensure solution robustness. It is desirable to minimize such potentially redundant task assignments, but not to the detriment of completed task valuations. This problem could be modeled using the WSS framework in a similar manner to that of combinatorial auctions.
In the case where no robust solutions are found, it is possible to optimize robustness, instead of revenue, by finding a solution of at least a given revenue that minimizes the probability of an irreparable break. In this manner the least brittle solution of adequate revenue may be chosen.

7. CONCLUSION
Fairness is often cited as a reason for choosing the optimal solution in terms of revenue only [22]. Robust solutions militate against bids deemed brittle, therefore bidders must earn a reputation for being reliable in order to relax the reparability constraint attached to their bids. This may be seen as being fair to long-standing business partners whose reliability is unquestioned. Internet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17]. Traditional business partnerships are being severed by increased competition amongst suppliers. Quality of Service can suffer because of the increased focus on short-term profitability, to the detriment of the bid-taker in the long term. Robust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner. As combinatorial auction deployment moves from large-value auctions with a small pool of trusted bidders (e.g. spectrum-rights sales) towards lower-value auctions with potentially unknown bidders (e.g. Supply Chain Management [30]), solution robustness becomes more relevant. As well as being used to ensure that the bid-taker is not left vulnerable to bid withdrawal, it may also be used to cement relationships with preferred, possibly incumbent, suppliers.

We have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue. We have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders. We have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability. We contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about post-repair revenue and also facilitating increased bidder aggressiveness.

8. REFERENCES
[1] Martin Andersson and Tuomas Sandholm. Leveled commitment contracts with myopic and strategic agents. Journal of Economic Dynamics and Control, 25:615-640, 2001. Special issue on Agent-Based Computational Economics.
[2] Fahiem Bacchus and George Katsirelos. EFC solver. www.cs.toronto.edu/~gkatsi/efc/efc.html.
[3] Michael Berkelaar, Kjell Eikland, and Peter Notebaert. lp_solve version 5.0.10.0. http://groups.yahoo.com/group/lp_solve/.
[4] Rina Dechter. Constraint Processing. Morgan Kaufmann, 2003.
[5] Sven DeVries and Rakesh Vohra. Combinatorial auctions: A survey. INFORMS Journal on Computing, pages 284-309, 2003.
[6] Jim Ericson. Reverse auctions: Bad idea. Line 56, September 2001.
[7] Matthew L. Ginsberg, Andrew J. Parkes, and Amitabha Roy. Supermodels and robustness. In Proceedings of AAAI-98, pages 334-339, Madison, WI, 1998.
[8] Ronald M. Harstad and Michael H. Rothkopf. Withdrawable bids as winner's curse insurance. Operations Research, 43(6):982-994, November-December 1995.
[9] Emmanuel Hebrard, Brahim Hnich, and Toby Walsh. Robust solutions for constraint satisfaction and optimization. In Proceedings of the European Conference on Artificial Intelligence, pages 186-190, 2004.
[10] Emmanuel Hebrard, Brahim Hnich, and Toby Walsh. Super solutions in constraint programming. In Proceedings of CP-AI-OR 2004, pages 157-172, 2004.
[11] Gail Hohner, John Rich, Ed Ng, Grant Reid, Andrew J. Davenport, Jayant R. Kalagnanam, Ho Soo Lee, and Chae An. Combinatorial and quantity-discount procurement auctions benefit Mars Incorporated and its suppliers. Interfaces, 33(1):23-35, 2003.
[12] Alan Holland and Barry O'Sullivan. Super solutions for combinatorial auctions. In Ercim-Colognet Constraints Workshop (CSCLP 04). Springer LNAI, Lausanne, Switzerland, 2004.
[13] Alan Holland and Barry O'Sullivan. Weighted super solutions for constraint programs, December 2004. Technical Report No. UCC-CS-2004-12-02.
[14] Selective Insurance. Business insurance. http://www.selectiveinsurance.com/psApps/Business/Ins/bonds.asp?bc=13.16.127.
[15] Ryan Kastner, Christina Hsieh, Miodrag Potkonjak, and Majid Sarrafzadeh. On the sensitivity of incremental algorithms for combinatorial auctions. In WECWIS, pages 81-88, June 2002.
[16] Kevin Leyton-Brown, Mark Pearson, and Yoav Shoham. Towards a universal test suite for combinatorial auction algorithms. In ACM Conference on Electronic Commerce, pages 66-76, 2000.
[17] Associated General Contractors of America. White paper on reverse auctions for procurement of construction. http://www.agc.org/content/public/pdf/Member_Resources/ReverseAuctionWhitePaper.pdf, 2003.
[18] National Society of Professional Engineers. A basic guide to surety bonds. http://www.nspe.org/pracdiv/76-02surebond.asp.
[19] Martin Pesendorfer and Estelle Cantillon. Combination bidding in multi-unit auctions. Harvard Business School Working Draft, 2003.
[20] Ryan Porter, Amir Ronen, Yoav Shoham, and Moshe Tennenholtz. Mechanism design with execution uncertainty. In Proceedings of UAI-02, pages 414-421, 2002.
[21] Jean-Charles Régin. Global constraints and filtering algorithms. In Constraint and Integer Programming: Towards a Unified Methodology, chapter 4, pages 89-129. Kluwer Academic Publishers, 2004.
[22] Michael H. Rothkopf and Aleksandar Pekeč. Combinatorial auction design. Management Science, 49(11):1485-1503, November 2003.
[23] Michael H. Rothkopf, Aleksandar Pekeč, and Ronald M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[24] Daniel Sabin and Eugene C. Freuder. Contradicting conventional wisdom in constraint satisfaction. In A. Cohn, editor, Proceedings of ECAI-94, pages 125-129, 1994.
[25] Tuomas Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135(1-2):1-54, 2002.
[26] Tuomas Sandholm and Victor Lesser. Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35:212-270, January 2001.
[27] Tuomas Sandholm and Victor Lesser. Leveled commitment contracting: A backtracking instrument for multiagent systems. AI Magazine, 23(3):89-100, 2002.
[28] Tuomas Sandholm, Sandeep Sikka, and Samphel Norden. Algorithms for optimizing leveled commitment contracts. In Proceedings of IJCAI-99, pages 535-541. Morgan Kaufmann Publishers Inc., 1999.
[29] Tuomas Sandholm and Yunhong Zhou. Surplus equivalence of leveled commitment contracts. Artificial Intelligence, 142:239-264, 2002.
[30] William E. Walsh, Michael P. Wellman, and Fredrik Ygge. Combinatorial auctions for supply chain formation. In ACM Conference on Electronic Commerce, pages 260-269, 2000.
[31] Rainier Weigel and Christian Bliek.
On reformulation of constraint satisfaction problems. In Proceedings of ECAI-98, pages 254-258, 1998.
[32] Margaret W. Wiener. Access spectrum bid withdrawal. http://wireless.fcc.gov/auctions/33/releases/da011719.pdf, July 2001.
Robust Solutions for Combinatorial Auctions * ABSTRACT Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature. In reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so. Given a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. We have called this the "Bid-taker's Exposure Problem". When faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue. In this paper, we propose an approach to addressing the Bidtaker's Exposure Problem. Firstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution. A weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes. Secondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker. We then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner. 1. INTRODUCTION A combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ. Such auctions are gaining in popularity and there is a proliferation in their usage across various industries such as telecoms, B2B procurement and transportation [11, 19]. Revenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness. In terms of combinatorial auctions, a robust solution is one that can withstand bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue. A brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn. In such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders. These bidders may associate a higher value for these items if they were combined with items already awarded to others, hence the bid-taker is left in an undesirable local optimum in which a form of backtracking is required to reallocate the items in a manner that results in sufficient revenue. We have called this the "Bid-taker's Exposure Problem" that bears similarities to the "Exposure Problem" faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items. However, reallocating items may be regarded as disruptive to a solution in many real-life scenarios. 
Consider a scenario where procurement for a business is conducted using a CA. It would be highly undesirable to retract contracts from a group of suppliers because of the failure of a third party. A robust solution that is tolerant of such breaks is preferable. Robustness may be regarded as a preventative measure protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability. We assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders from which the likelihood of an incomplete transaction may be inferred. Repair solutions are required for bids that are seen as brittle (i.e. likely to break). Repairs may also be required for sets of bids deemed brittle. We propose the use of the Weighted Super Solutions (WSS) framework [13] for constraint programming, that is ideal for establishing such robust solutions. As we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable. This paper is organized as follows. Section 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers. This motivates the use of robust solutions and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions. We then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, that may be seen as a form of leveled commitment contract [26, 27]. Section 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results. Section 6 discusses possible extensions and questions raised by our research that deserve future work. Finally, in Section 7 a number of concluding remarks are made. 2. COMBINATORIAL AUCTIONS 2.1 The Problem of Bid Withdrawal 2.2 Being Proactive against Bid Withdrawal 3. FINDING ROBUST SOLUTIONS 4. MUTUAL BID BONDS: A BACKTRACKING MECHANISM 5. EXPERIMENTS 5.1 Sensitivity Analysis for the WDP 5.2 Robust Solutions using WSS 5.3 Results 7. CONCLUSION Fairness is often cited as a reason for choosing the optimal solution in terms of revenue only [22]. Robust solutions militate against bids deemed brittle, therefore bidders must earn a reputation for being reliable to relax the reparability constraint attached to their bids. This may be seen as being fair to long-standing business partners whose reliability is unquestioned. Internet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17]. Traditional business partnerships are being severed by increased competition amongst suppliers. Quality of Service can suffer because of the increased focus on short-term profitability to the detriment of the bid-taker in the long-term. Robust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner. As combinatorial auction deployment moves from large value auctions with a small pool of trusted bidders (e.g. spectrum-rights sales) towards lower value auctions with potentially unknown bidders (e.g. Supply Chain Management [30]), solution robustness becomes more relevant. 
As well as being used to ensure that the bid-taker is not left vulnerable to bid withdrawal, it may also be used to cement relationships with preferred, possibly incumbent, suppliers. We have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue. We have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders. We have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability. We contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about postrepair revenue and also facilitating increased bidder aggressiveness.
Robust Solutions for Combinatorial Auctions * ABSTRACT Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature. In reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so. Given a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. We have called this the "Bid-taker's Exposure Problem". When faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue. In this paper, we propose an approach to addressing the Bid-taker's Exposure Problem. Firstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution. A weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes. Secondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker. We then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner. 1. INTRODUCTION A combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ. Revenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness. In terms of combinatorial auctions, a robust solution is one that can withstand bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue. A brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn. In such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders. We have called this the "Bid-taker's Exposure Problem", which bears similarities to the "Exposure Problem" faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items. However, reallocating items may be regarded as disruptive to a solution in many real-life scenarios. Consider a scenario where procurement for a business is conducted using a CA. A robust solution that is tolerant of such breaks is preferable. Robustness may be regarded as a preventative measure protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability. We assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders, from which the likelihood of an incomplete transaction may be inferred.
Repair solutions are required for bids that are seen as brittle (i.e. likely to break). Repairs may also be required for sets of bids deemed brittle. We propose the use of the Weighted Super Solutions (WSS) framework [13] for constraint programming, which is ideal for establishing such robust solutions. As we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable. This paper is organized as follows. Section 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers. This motivates the use of robust solutions, and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions. We then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, which may be seen as a form of leveled commitment contract [26, 27]. Section 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results. Section 6 discusses possible extensions and questions raised by our research that deserve future work. Finally, in Section 7 a number of concluding remarks are made. 7. CONCLUSION Fairness is often cited as a reason for choosing the optimal solution in terms of revenue only [22]. Robust solutions militate against bids deemed brittle; bidders must therefore earn a reputation for being reliable to relax the reparability constraint attached to their bids. This may be seen as being fair to long-standing business partners whose reliability is unquestioned. Internet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17]. Traditional business partnerships are being severed by increased competition amongst suppliers. Robust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner. We have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue. We have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders. We have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability. We contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about post-repair revenue, and also facilitate increased bidder aggressiveness.
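As an illustration of the Winner Determination Problem referenced above (not taken from the paper), the sketch below solves a toy WDP instance by brute force: choose the revenue-maximizing set of bids such that no item is allocated twice. The bidder names, items and prices are invented, and real solvers would use integer programming or branch-and-bound rather than enumeration.

```python
from itertools import combinations

# Toy combinatorial-auction bids: (bidder, items sought, price offered).
# All values here are hypothetical.
bids = [
    ("b1", {"A", "B"}, 10),
    ("b2", {"B", "C"}, 8),
    ("b3", {"C"}, 5),
    ("b4", {"A"}, 4),
]

def winner_determination(bids):
    """Exhaustive WDP: maximize revenue over item-disjoint bid subsets.
    Exponential in the number of bids, so only suitable for tiny examples."""
    best, best_revenue = (), 0
    for k in range(1, len(bids) + 1):
        for combo in combinations(bids, k):
            item_sets = [items for _, items, _ in combo]
            # Feasible iff no item appears in two accepted bids.
            if sum(len(s) for s in item_sets) == len(set().union(*item_sets)):
                revenue = sum(price for _, _, price in combo)
                if revenue > best_revenue:
                    best, best_revenue = combo, revenue
    return best, best_revenue

winners, revenue = winner_determination(bids)
print([name for name, _, _ in winners], revenue)  # ['b1', 'b3'] 15
```

A weighted super solution, as described above, would add a further requirement on top of this basic optimization: for each bid (or set of bids) deemed brittle, a repair solution of bounded revenue loss must exist.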
J-42
The Dynamics of Viral Marketing
We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.
[ "viral market", "viral market", "recommend network", "product", "stochast model", "purchas", "price categori", "advertis", "consum", "direct multi graph", "probabl", "connect individu", "e-commerc", "recommend system" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U", "U", "U", "M" ]
The Dynamics of Viral Marketing ∗ Jure Leskovec † Carnegie Mellon University Pittsburgh, PA 15213 jure@cs.cmu.edu Lada A. Adamic ‡ University of Michigan Ann Arbor, MI 48109 ladamic@umich.edu Bernardo A. Huberman HP Labs Palo Alto, CA 94304 bernardo.huberman@hp.com ABSTRACT We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics 1. INTRODUCTION With consumers showing increasing resistance to traditional forms of advertising such as TV or newspaper ads, marketers have turned to alternate strategies, including viral marketing. Viral marketing exploits existing social networks by encouraging customers to share product information with their friends. Previously, a few in-depth studies have shown that social networks affect the adoption of individual innovations and products (for a review see [15] or [16]). But until recently it has been difficult to measure how influential person-to-person recommendations actually are over a wide range of products. We were able to directly measure and model the effectiveness of recommendations by studying one online retailer's incentivised viral marketing program. The website gave discounts to customers recommending any of its products to others, and then tracked the resulting purchases and additional recommendations. Although word of mouth can be a powerful factor influencing purchasing decisions, it can be tricky for advertisers to tap into. Some services used by individuals to communicate are natural candidates for viral marketing, because the product can be observed or advertised as part of the communication. Email services such as Hotmail and Yahoo had very fast adoption curves because every email sent through them contained an advertisement for the service and because they were free. Hotmail spent a mere $50,000 on traditional marketing and still grew from zero to 12 million users in 18 months [7]. Google's Gmail captured a significant part of market share in spite of the fact that the only way to sign up for the service was through a referral. Most products cannot be advertised in such a direct way. At the same time the choice of products available to consumers has increased manyfold thanks to online retailers who can supply a much wider variety of products than traditional brick-and-mortar stores. Not only is the variety of products larger, but one observes a "fat tail" phenomenon, where a large fraction of purchases are of relatively obscure items. On Amazon.com, somewhere between 20 to 40 percent of unit sales fall outside of its top 100,000 ranked products [2]. Rhapsody, a streaming-music service, streams more tracks outside than inside its top 10,000 tunes [1]. Effectively advertising these niche products using traditional advertising approaches is impractical.
Therefore using more targeted marketing approaches is advantageous both to the merchant and the consumer, who would benefit from learning about new products. The problem is partly addressed by the advent of online product and merchant reviews, both at retail sites such as eBay and Amazon, and specialized product comparison sites such as Epinions and CNET. Quantitative marketing techniques have been proposed [12], and the rating of products and merchants has been shown to affect the likelihood of an item being bought [13, 4]. Of further help to the consumer are collaborative filtering recommendations of the form "people who bought x also bought y" [11]. These refinements help consumers discover new products and receive more accurate evaluations, but they cannot completely substitute for the personalized recommendations that one receives from a friend or relative. It is human nature to be more interested in what a friend buys than what an anonymous person buys, to be more likely to trust their opinion, and to be more influenced by their actions. Our friends are also acquainted with our needs and tastes, and can make appropriate recommendations. A Lucid Marketing survey found that 68% of individuals consulted friends and relatives before purchasing home electronics - more than the half who used search engines to find product information [3]. Several studies have attempted to model just this kind of network influence. Richardson and Domingos [14] used Epinions' trusted reviewer network to construct an algorithm to maximize viral marketing efficiency, assuming that individuals' probability of purchasing a product depends on the opinions of the trusted peers in their network. Kempe, Kleinberg and Tardos [8] evaluate the efficiency of several algorithms for maximizing the size of the influence set given various models of adoption. While these models address the question of maximizing the spread of influence in a network, they are based on assumed rather than measured influence effects. In contrast, in our study we are able to directly observe the effectiveness of person-to-person word of mouth advertising for hundreds of thousands of products for the first time. We find that most recommendation chains do not grow very large, often terminating with the initial purchase of a product. However, occasionally a product will propagate through a very active recommendation network. We propose a simple stochastic model that seems to explain the propagation of recommendations. Moreover, the characteristics of recommendation networks influence the purchase patterns of their members. For example, individuals' likelihood of purchasing a product initially increases as they receive additional recommendations for it, but a saturation point is quickly reached. Interestingly, as more recommendations are sent between the same two individuals, the likelihood that they will be heeded decreases. We also propose models to identify products for which viral marketing is effective: We find that the category and price of product plays a role, with recommendations of expensive products of interest to small, well connected communities resulting in a purchase more often. We also observe patterns in the timing of recommendations and purchases corresponding to times of day when people are likely to be shopping online or reading email. We report on these and other findings in the following sections.
2. THE RECOMMENDATION NETWORK 2.1 Dataset description Our analysis focuses on the recommendation referral program run by a large retailer. The program rules were as follows. Each time a person purchases a book, music, or a movie he or she is given the option of sending emails recommending the item to friends. The first person to purchase the same item through a referral link in the email gets a 10% discount. When this happens the sender of the recommendation receives a 10% credit on their purchase. The recommendation dataset consists of 15,646,121 recommendations made among 3,943,084 distinct users. The data was collected from June 5 2001 to May 16 2003. In total, 548,523 products were recommended, 99% of them belonging to 4 main product groups: Books, DVDs, Music and Videos. In addition to recommendation data, we also crawled the retailer's website to obtain product categories, reviews and ratings for all products. Of the products in our data set, 5813 (1%) were discontinued (the retailer no longer provided any information about them). Although the data gives us a detailed and accurate view of recommendation dynamics, it does have its limitations. The only indication of the success of a recommendation is the observation of the recipient purchasing the product through the same vendor. We have no way of knowing if the person had decided instead to purchase elsewhere, borrow, or otherwise obtain the product. The delivery of the recommendation is also somewhat different from one person simply telling another about a product they enjoy, possibly in the context of a broader discussion of similar products. The recommendation is received as a form email including information about the discount program. Someone reading the email might consider it spam, or at least deem it less important than a recommendation given in the context of a conversation. The recipient may also doubt whether the friend is recommending the product because they think the recipient might enjoy it, or are simply trying to get a discount for themselves. Finally, because the recommendation takes place before the recommender receives the product, it might not be based on a direct observation of the product. Nevertheless, we believe that these recommendation networks are reflective of the nature of word of mouth advertising, and give us key insights into the influence of social networks on purchasing decisions. 2.2 Recommendation network statistics For each recommendation, the dataset included the product and product price, sender ID, receiver ID, the sent date, and a buy-bit, indicating whether the recommendation resulted in a purchase and discount. The sender and receiver IDs were shadowed (anonymized). We represent this data set as a directed multi-graph. The nodes represent customers, and a directed edge contains all the information about the recommendation. The edge (i, j, p, t) indicates that i recommended product p to customer j at time t. The typical process generating edges in the recommendation network is as follows: a node i first buys a product p at time t and then recommends it to nodes j1, ..., jn. The j nodes can then buy the product and further recommend it. The only way for a node to recommend a product is to first buy it. Note that even if all nodes j buy a product, only the edge to the node jk that first made the purchase (within a week after the recommendation) will be marked by a buy-bit.
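As a concrete (but hypothetical) illustration of the directed multi-graph representation just described, the sketch below stores each recommendation as an (i, j, p, t) edge plus its buy-bit, and derives the node, edge and unique-edge counts of the kind reported in Table 1. The field names and sample records are our own; the paper does not specify a storage format.

```python
# Each recommendation is an edge (sender i, receiver j, product p, time t)
# plus the buy-bit; parallel edges between the same pair are allowed.
recs = [
    ("i1", "j1", "p9", 100, False),
    ("i1", "j2", "p9", 100, True),   # j2 bought first and got the discount
    ("i1", "j2", "p7", 120, False),  # same pair, another product: multi-graph
    ("j2", "j3", "p9", 140, False),
]

n = len({node for i, j, *_ in recs for node in (i, j)})  # distinct customers
e = len(recs)                                            # recommendations
e_u = len({(i, j) for i, j, *_ in recs})                 # unique (i, j) pairs
buy_bits = sum(1 for *_, bought in recs if bought)

print(f"n={n}, e={e}, e_u={e_u}, buy-bits={buy_bits}")   # n=4, e=4, e_u=3, buy-bits=1
```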
Because the buy-bit is set only for the first person who acts on a recommendation, we identify additional purchases by the presence of outgoing recommendations for a person, since all recommendations must be preceded by a purchase. We call this type of evidence of purchase a buy-edge. Note that buy-edges provide only a lower bound on the total number of purchases without discounts. It is possible for a customer to not be the first to act on a recommendation and also to not recommend the product to others. Unfortunately, this was not recorded in the data set. We consider, however, the buy-bits and buy-edges as proxies for the total number of purchases through recommendations. For each product group we took recommendations on all products from the group and created a network. [Figure 1: (a) The size of the largest connected component of customers over time. The inset shows the linear growth in the number of customers n over time. (b) The number of recommendations sent by a user, with each curve representing a different depth of the user in the recommendation chain. A power law exponent γ is fitted to all but the tail.] Table 1 (first 7 columns) shows the sizes of various product group recommendation networks with p being the total number of products in the product group, n the total number of nodes spanned by the group recommendation network and e the number of edges (recommendations). The column eu shows the number of unique edges - disregarding multiple recommendations between the same source and recipient. In terms of the number of different items, there are by far the most music CDs, followed by books and videos. There is a surprisingly small number of DVD titles. On the other hand, DVDs account for more than half of all recommendations in the dataset. The DVD network is also the most dense, having about 10 recommendations per node, while books and music have about 2 recommendations per node and videos have only a bit more than 1 recommendation per node. Music recommendations reached about the same number of people as DVDs but used more than 5 times fewer recommendations to achieve the same coverage of the nodes. Book recommendations reached by far the most people - 2.8 million. Notice that all networks have a very small number of unique edges. For books, videos and music the number of unique edges is smaller than the number of nodes - this suggests that the networks are highly disconnected [5]. Figure 1(a) shows the fraction of nodes in the largest weakly connected component over time. Notice the component is very small. Even if we compose a network using all the recommendations in the dataset, the largest connected component contains less than 2.5% (100,420) of the nodes, and the second largest component has only 600 nodes. Still, some smaller communities, numbering in the tens of thousands of purchasers of DVDs in categories such as westerns, classics and Japanese animated films (anime), had connected components spanning about 20% of their members. The inset in figure 1(a) shows the growth of the customer base over time.
Surprisingly it was linear, adding on average 165,000 new users each month, which is an indication that the service itself was not spreading epidemically. Further evidence of non-viral spread is provided by the relatively high percentage (94%) of users who made their first recommendation without having previously received one. Back to table 1: given the total number of recommendations e and purchases (bb + be) influenced by recommendations, we can estimate how many recommendations need to be independently sent over the network to induce a new purchase. Using this metric books have the most influential recommendations, followed by DVDs and music. For books one out of 69 recommendations resulted in a purchase. For DVDs it increases to 108 recommendations per purchase and further increases to 136 for music and 203 for video. Even with these simple counts we can make the first few observations. It seems that some people got quite heavily involved in the recommendation program, and that they tended to recommend a large number of products to the same set of friends (since the number of unique edges is so small). This shows that people tend to buy more DVDs and also like to recommend them to their friends, while they seem to be more conservative with books. One possible reason is that a book is a bigger time investment than a DVD: one usually needs several days to read a book, while a DVD can be viewed in a single evening. One external factor which may be affecting the recommendation patterns for DVDs is the existence of referral websites (www.dvdtalk.com). On these websites people who want to buy a DVD and get a discount would ask for recommendations. This way there would be recommendations made between people who don't really know each other but rather have an economic incentive to cooperate. We were not able to find similar referral sharing sites for books or CDs. 2.3 Forward recommendations Not all people who make a purchase also decide to give recommendations. So we estimate what fraction of people that purchase also decide to recommend forward. To obtain this information we can only use the nodes with purchases that resulted in a discount. The last 3 columns of table 1 show that only about a third of the people that purchase also recommend the product forward. The ratio of forward recommendations is much higher for DVDs than for other kinds of products. Videos also have a higher ratio of forward recommendations, while books have the lowest. This shows that people are most keen on recommending movies, while more conservative when recommending books and music. Figure 1(b) shows the cumulative out-degree distribution, that is, the number of people who sent out at least kp recommendations for a product. It shows that the deeper an individual is in the cascade, if they choose to make recommendations, they tend to recommend to a greater number of people on average (the distribution has a higher variance). This effect is probably due to only very heavily recommended products producing large enough cascades to reach a certain depth. We also observe that the probability of an individual making a recommendation at all (which can only occur if they make a purchase) declines after an initial increase as one gets deeper into the cascade. 2.4 Identifying cascades As customers continue forwarding recommendations, they contribute to the formation of cascades.
In order to identify cascades, i.e. the causal propagation of recommendations, we track successful recommendations as they influence purchases and further recommendations. We define a recommendation to be successful if it reached a node before its first purchase. We consider only the first purchase of an item, because there are many cases when a person made multiple purchases of the same product, and in between those purchases she may have received new recommendations. In this case one cannot conclude that recommendations following the first purchase influenced the later purchases. Each cascade is a network consisting of customers (nodes) who purchased the same product as a result of each other's recommendations (edges). We delete late recommendations - all incoming recommendations that happened after the first purchase of the product. This way we make the network time-increasing, or causal - for each node all incoming edges (recommendations) occurred before all outgoing edges. Now each connected component represents a time-obeying propagation of recommendations.

Table 1: Product group recommendation statistics. p: number of products, n: number of nodes, e: number of edges (recommendations), eu: number of unique edges, bb: number of buy-bits, be: number of buy-edges. Last 3 columns: fraction of people that purchase and also recommend forward. Purchases: number of nodes that purchased. Forward: nodes that purchased and then also recommended the product.

Group | p       | n         | e          | eu        | bb     | be     | Purchases | Forward | Percent
Book  | 103,161 | 2,863,977 | 5,741,611  | 2,097,809 | 65,344 | 17,769 | 65,391    | 15,769  | 24.2
DVD   | 19,829  | 805,285   | 8,180,393  | 962,341   | 17,232 | 58,189 | 16,459    | 7,336   | 44.6
Music | 393,598 | 794,148   | 1,443,847  | 585,738   | 7,837  | 2,739  | 7,843     | 1,824   | 23.3
Video | 26,131  | 239,583   | 280,270    | 160,683   | 909    | 467    | 909       | 250     | 27.6
Total | 542,719 | 3,943,084 | 15,646,121 | 3,153,676 | 91,322 | 79,164 | 90,602    | 25,179  | 27.8

Figure 2 shows two typical product recommendation networks: (a) a medical study guide and (b) a Japanese graphic novel. [Figure 2: Examples of two product recommendation networks: (a) first aid study guide First Aid for the USMLE Step, (b) Japanese graphic novel (manga) Oh My Goddess!: Mara Strikes Back.] Throughout the dataset we observe very similar patterns. Most product recommendation networks consist of a large number of small disconnected components where we do not observe cascades. Then there is usually a small number of relatively small components with recommendations successfully propagating. This observation is reflected in the heavy-tailed distribution of cascade sizes (see figure 4), having a power-law exponent close to 1 for DVDs in particular. We also notice bursts of recommendations (figure 2(b)). Some nodes recommend to many friends, forming a star-like pattern. Figure 3 shows the distribution of the recommendations and purchases made by a single node in the recommendation network. [Figure 3: Distribution of the number of recommendations and number of purchases made by a node; both are well fit by power laws with exponents 2.30 and 2.49 respectively.] Notice the power-law distributions and long flat tails. The most active person made 83,729 recommendations and purchased 4,416 different items. Finally, we also sometimes observe "collisions", where nodes receive recommendations from two or more sources.
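A minimal sketch of the cascade-identification step just described, using networkx (our choice of library; the paper does not name an implementation): late recommendations are dropped so the remaining graph is causal, and each weakly connected component is then one cascade.

```python
import networkx as nx

def cascades(recs, first_purchase):
    """recs: (sender, receiver, time) triples for a single product.
    first_purchase: node -> time of that node's first purchase of the product.
    Deletes 'late' recommendations (those arriving after the receiver's first
    purchase), then returns the connected components of the causal graph."""
    g = nx.DiGraph()
    for u, v, t in recs:
        if v in first_purchase and t > first_purchase[v]:
            continue  # late: arrived after the receiver's first purchase
        g.add_edge(u, v)
    return [sorted(c) for c in nx.weakly_connected_components(g)]

recs = [("a", "b", 1), ("b", "c", 3), ("a", "c", 9), ("x", "y", 2)]
first_purchase = {"b": 2, "c": 5, "y": 4}
print(cascades(recs, first_purchase))  # two cascades: {a, b, c} and {x, y}
```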
A detailed enumeration and analysis of observed topological cascade patterns for this dataset is made in [10]. 2.5 The recommendation propagation model A simple model can help explain how the wide variance we observe in the number of recommendations made by individuals can lead to power laws in cascade sizes (figure 4). [Figure 4: Size distribution of cascades (size of cascade vs. count) for (a) Book, (b) DVD, (c) Music and (d) Video; the bold line presents a power-law fit.] The model assumes that each recipient of a recommendation will forward it to others if its value exceeds an arbitrary threshold that the individual sets for herself. Since exceeding this value is a probabilistic event, let's call $p_t$ the probability that at time step $t$ the recommendation exceeds the threshold. In that case the number of recommendations $N_{t+1}$ at time $t+1$ is given in terms of the number of recommendations at an earlier time by

$N_{t+1} = p_t N_t$ (1)

where the probability $p_t$ is defined over the unit interval. Notice that, because of the probabilistic nature of the threshold being exceeded, one can only compute the final distribution of recommendation chain lengths, which we now proceed to do. Subtracting $N_t$ from both sides of this equation and dividing by it we obtain

$\frac{N_{t+1} - N_t}{N_t} = p_t - 1$ (2)

Summing both sides from the initial time to some very large time $T$ and assuming that for long times the numerator is smaller than the denominator (a reasonable assumption) we get

$\int \frac{dN}{N} = \sum_t p_t$ (3)

The left-hand integral is just $\ln(N)$, and the right-hand side is a sum of random variables, which in the limit of a very large uncorrelated number of recommendations is normally distributed (central limit theorem). This means that the logarithm of the number of messages is normally distributed, or equivalently, that the number of messages passed is log-normally distributed. In other words the probability density for $N$ is given by

$P(N) = \frac{1}{N\sqrt{2\pi\sigma^2}} \exp\left(\frac{-(\ln N - \mu)^2}{2\sigma^2}\right)$ (4)

which, for large variances, describes a behavior whereby the typical number of recommendations is small (the mode of the distribution) but there are unlikely events of large chains of recommendations which are also observable. Furthermore, for large variances, the lognormal distribution can behave like a power law for a range of values. In order to see this, take the logarithm on both sides of the equation (equivalent to a log-log plot) and one obtains

$\ln(P(N)) = -\ln(N) - \ln(\sqrt{2\pi\sigma^2}) - \frac{(\ln N - \mu)^2}{2\sigma^2}$ (5)

So, for large $\sigma$, the last term of the right-hand side goes to zero, and since the second term is a constant one obtains a power-law behavior with exponent value of minus one. There are other models which produce power-law distributions of cascade sizes, but we present ours for its simplicity, since it does not depend on network topology [6] or critical thresholds in the probability of a recommendation being accepted [18].
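The central-limit argument above is easy to check numerically. The sketch below (ours, not from the paper) iterates $N_{t+1} = p_t N_t$ with $p_t$ drawn uniformly from (0, 1); since $E[\ln p_t] = -1$ and $\mathrm{Var}[\ln p_t] = 1$, after $T$ steps $\ln N$ should be approximately normal with mean $\ln N_0 - T$ and variance $T$, i.e. $N$ is log-normal as in equation (4).

```python
import math
import random

def messages_after(T=10, n0=1000.0):
    """One realization of N_{t+1} = p_t * N_t with p_t ~ Uniform(0, 1)."""
    n = n0
    for _ in range(T):
        n *= random.random()
    return n

samples = [math.log(messages_after()) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# Expect mean close to ln(1000) - 10 = -3.09 and variance close to 10.
print(f"mean(ln N) = {mean:.2f}, var(ln N) = {var:.2f}")
```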
3. SUCCESS OF RECOMMENDATIONS So far we only looked into the aggregate statistics of the recommendation network. Next, we ask questions about the effectiveness of recommendations in the recommendation network itself. First, we analyze the probability of purchasing as one gets more and more recommendations. Next, we measure recommendation effectiveness as two people exchange more and more recommendations. Lastly, we observe the recommendation network from the perspective of the sender of the recommendation. Does a node that makes more recommendations also influence more purchases? 3.1 Probability of buying versus number of incoming recommendations First, we examine how the probability of purchasing changes as one gets more and more recommendations. One would expect that a person is more likely to buy a product if she gets more recommendations. On the other hand one would also think that there is a saturation point - if a person hasn't bought a product after a number of recommendations, they are not likely to change their minds after receiving even more of them. So, how many recommendations are too many? Figure 5 shows the probability of purchasing a product as a function of the number of incoming recommendations on the product. [Figure 5: Probability of buying a book (DVD) given a number of incoming recommendations; (a) Books, (b) DVD.] As we move to higher numbers of incoming recommendations, the number of observations drops rapidly. For example, there were 5 million cases with 1 incoming recommendation on a book, and only 58 cases where a person got 20 incoming recommendations on a particular book. The maximum was 30 incoming recommendations. For these reasons we cut off the plot when the number of observations becomes too small and the error bars too large. Figure 5(a) shows that, overall, book recommendations are rarely followed. Even more surprisingly, as more and more recommendations are received, their success decreases. We observe a peak in probability of buying at 2 incoming recommendations and then a slow drop. For DVDs (figure 5(b)) we observe a saturation around 10 incoming recommendations. This means that after a person gets 10 recommendations on a particular DVD, they become immune to them - their probability of buying does not increase anymore. The number of observations is 2.5 million at 1 incoming recommendation and 100 at 60 incoming recommendations. The maximal number of received recommendations is 172 (and that person did not buy).
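The Figure 5 curves are, in effect, conditional purchase frequencies. A sketch of the estimate follows; the input shape is our own simplification, since the dataset is not exposed in this form.

```python
from collections import defaultdict

def buy_probability_by_incoming(observations):
    """observations: (k, bought) pairs, one per (person, product), where k is
    the number of incoming recommendations that person got for the product.
    Returns the empirical P(buy | k), one point per x-value of Figure 5."""
    seen = defaultdict(int)
    bought = defaultdict(int)
    for k, did_buy in observations:
        seen[k] += 1
        bought[k] += int(did_buy)
    return {k: bought[k] / seen[k] for k in sorted(seen)}

obs = [(1, False), (1, False), (1, True), (2, True), (2, False), (3, False)]
print(buy_probability_by_incoming(obs))  # {1: 0.33..., 2: 0.5, 3: 0.0}
```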
[Figure 6: The effectiveness of recommendations with the total number of exchanged recommendations; (a) Books, (b) DVD.] 3.2 Success of subsequent recommendations Next, we analyze how the effectiveness of recommendations changes as two persons exchange more and more recommendations. A large number of exchanged recommendations can be a sign of trust and influence, but a sender of too many recommendations can be perceived as a spammer. A person who recommends only a few products will have her friends' attention, but one who floods her friends with all sorts of recommendations will start to lose her influence. We measure the effectiveness of recommendations as a function of the total number of previously exchanged recommendations between the two nodes. We construct the experiment in the following way. For every recommendation r on some product p between nodes u and v, we first determine how many recommendations were exchanged between u and v before recommendation r. Then we check whether v, the recipient of the recommendation, purchased p after recommendation r arrived. For the experiment we consider only node pairs (u, v) where there were at least a total of 10 recommendations sent from u to v. We perform the experiment using only recommendations from the same product group. Figure 6 shows the probability of buying as a function of the total number of exchanged recommendations between two persons up to that point. For books we observe that the effectiveness of recommendation remains about constant up to 3 exchanged recommendations. As the number of exchanged recommendations increases, the probability of buying starts to decrease to about half of the original value and then levels off. For DVDs we observe an immediate and consistent drop. This experiment shows that recommendations start to lose effect after more than two or three are passed between two people. We performed the experiment also for video and music, but the number of observations was too low and the measurements were noisy.
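The exchanged-recommendations experiment described above can be sketched as follows. The input encoding is our own invention, and the paper's filter to pairs with at least 10 recommendations is omitted for brevity, but the logic mirrors the stated construction: bin each recommendation's outcome by how many recommendations the pair had already exchanged.

```python
from collections import defaultdict

def success_by_exchange_count(recs, resulted_in_purchase):
    """recs: time-ordered (sender, receiver) pairs within one product group;
    resulted_in_purchase[idx] says whether recommendation idx led the
    receiver to purchase. Returns P(buy | k previously exchanged recs)."""
    exchanged = defaultdict(int)            # pair -> recs exchanged so far
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for idx, (u, v) in enumerate(recs):
        pair = frozenset((u, v))
        k = exchanged[pair]
        attempts[k] += 1
        successes[k] += int(resulted_in_purchase[idx])
        exchanged[pair] += 1
    return {k: successes[k] / attempts[k] for k in sorted(attempts)}

recs = [("u", "v"), ("u", "v"), ("v", "u"), ("u", "v")]
print(success_by_exchange_count(recs, [True, False, False, False]))
# {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
```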
3.3 Success of outgoing recommendations In previous sections we examined the data from the viewpoint of the receiver of the recommendation. Now we look from the viewpoint of the sender. The two interesting questions are: how does the probability of getting a 10% credit change with the number of outgoing recommendations; and given a number of outgoing recommendations, how many purchases will they influence? One would expect that recommendations would be the most effective when recommended to the right subset of friends. If one is very selective and recommends to too few friends, then the chances of success are slim. On the other hand, recommending to everyone and spamming them with recommendations may have limited returns as well. The top row of figure 7 shows how the average number of purchases changes with the number of outgoing recommendations. [Figure 7: Top row: number of resulting purchases given a number of outgoing recommendations. Bottom row: probability of getting a credit given a number of outgoing recommendations. Panels: (a) Books, (b) DVD, (c) Music, (d) Video.] For books, music, and videos the number of purchases soon saturates: it grows fast up to around 10 outgoing recommendations and then the trend either slows or starts to drop. DVDs exhibit different behavior, with the expected number of purchases increasing throughout. But if we plot the probability of getting a 10% credit as a function of the number of outgoing recommendations, as in the bottom row of figure 7, we see that the success of DVD recommendations saturates as well, while books, videos and music have qualitatively similar trends. The difference in the curves for DVD recommendations points to the presence of collisions in the dense DVD network, which has 10 recommendations per node and around 400 per product - an order of magnitude more than other product groups. This means that many different individuals are recommending to the same person, and after that person makes a purchase, even though all of them made a "successful recommendation" by our definition, only one of them receives a credit. 4. TIMING OF RECOMMENDATIONS AND PURCHASES The recommendation referral program encourages people to purchase as soon as possible after they get a recommendation, since this maximizes the probability of getting a discount. We study the time lag between the recommendation and the purchase of different product groups, effectively how long it takes a person to both receive a recommendation, consider it, and act on it. We present the histograms of the thinking time, i.e. the difference between the time of purchase and the time the last recommendation was received for the product prior to the purchase (figure 8). [Figure 8: The time between the recommendation and the actual purchase, for (a) Books and (b) DVD. We use all purchases.] We use a bin size of 1 day. Around 35%-40% of book and DVD purchases occurred within a day after the last recommendation was received. For DVDs 16% of purchases occur more than a week after the last recommendation, while this drops to 10% for books. In contrast, if we consider the lag between the purchase and the first recommendation, only 23% of DVD purchases are made within a day, while the proportion stays the same for books. This reflects a greater likelihood for a person to receive multiple recommendations for a DVD than for a book. At the same time, DVD recommenders tend to send out many more recommendations, only one of which can result in a discount. Individuals then often miss their chance of a discount, which is reflected in the high ratio (78%) of recommended DVD purchases that did not get a discount (see table 1, columns bb and be). In contrast, for books, only 21% of purchases through recommendations did not receive a discount. We also measure the variation in intensity by time of day for three different activities in the recommendation system: recommendations (figure 9(a)), all purchases (figure 9(b)), and finally just the purchases which resulted in a discount (figure 9(c)). Each is given as a total count by hour of day. [Figure 9: Time of day for purchases and recommendations: (a) shows the distribution of recommendations over the day, (b) shows all purchases and (c) shows only purchases that resulted in getting a discount.] The recommendations and purchases follow the same pattern. The only small difference is that purchases reach a sharper peak in the afternoon (after 3pm Pacific Time, 6pm Eastern Time). The purchases that resulted in a discount look like a negative image of the first two figures. This means that most of the discounted purchases happened in the morning when the traffic (number of purchases/recommendations) on the retailer's website was low. This makes a lot of sense since most of the recommendations happened during the day, and if a person wanted to get the discount by being the first one to purchase, she had the highest chances when the traffic on the website was the lowest. 5. RECOMMENDATION EFFECTIVENESS BY BOOK CATEGORY Social networks are a product of the contexts that bring people together. Some contexts result in social ties that are more effective at conducting an action. For example, in small-world experiments, where participants attempt to reach a target individual through their chain of acquaintances, profession trumped geography, which in turn was more useful in locating a target than attributes such as religion or hobbies [9, 17]. In the context of product recommendations, we can ask whether a recommendation for a work of fiction, which may be made by any friend or neighbor, is more or less influential than a recommendation for a technical book, which may be made by a colleague at work or school. Table 2 shows recommendation trends for all top-level book categories by subject. An analysis of other product types can be found in the extended version of the paper. For clarity, we group the results by 4 different category types: fiction, personal/leisure, professional/technical, and nonfiction/other. Fiction encompasses categories such as Sci-Fi and Romance, as well as children's and young adult books. Personal/Leisure encompasses everything from gardening, photography and cooking to health and religion. First, we compare the relative number of recommendations to reviews posted on the site (column cav/rp1 of table 2). Surprisingly, we find that the number of people making personal recommendations was only a few times greater than the number of people posting a public review on the website. We observe that fiction books have relatively few recommendations compared to the number of reviews, while professional and technical books have more recommendations than reviews. This could reflect several factors. One is that people feel more confident reviewing fiction than technical books. Another is that they hesitate to recommend a work of fiction before reading it themselves, since the recommendation must be made at the point of purchase. Yet another explanation is that the median price of a work of fiction is lower than that of a technical book. This means that the discount received for successfully recommending a mystery novel or thriller is lower, and hence people have less incentive to send recommendations. Next, we measure the per-category efficacy of recommendations by observing the ratio of the number of purchases occurring within a week following a recommendation to the number of recommenders for each book subject category (column b of table 2).
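The "thinking time" histogram of Figure 8 is a simple binned lag count. A sketch under an assumed input shape (timestamps in seconds; the field names are our own):

```python
from collections import Counter

SECONDS_PER_DAY = 86_400

def thinking_time_histogram(purchases, last_rec_time):
    """purchases: (person, product, purchase_timestamp) triples;
    last_rec_time: (person, product) -> timestamp of the last recommendation
    received before the purchase. One-day bins, as in Figure 8."""
    hist = Counter()
    for person, product, bought_at in purchases:
        lag = bought_at - last_rec_time[(person, product)]
        hist[int(lag // SECONDS_PER_DAY)] += 1
    total = sum(hist.values())
    return {day: count / total for day, count in sorted(hist.items())}

purchases = [("a", "p", 90_000), ("b", "p", 400_000)]
last_rec_time = {("a", "p"): 10_000, ("b", "p"): 100_000}
print(thinking_time_histogram(purchases, last_rec_time))  # {0: 0.5, 3: 0.5}
```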
On average, only 2% of the recommenders of a book received a discount because their recommendation was accepted, and another 1% made a recommendation that resulted in a purchase, but not a discount. We observe marked differences in the response to recommendation for different categories of books. Fiction in general is not very effectively recommended, with only around 2% of recommenders succeeding. The efficacy was a bit higher (around 3%) for non-fiction books dealing with personal and leisure pursuits, but is significantly higher in the professional and technical category. Medical books have nearly double the average rate of recommendation acceptance. This could be in part attributed to the higher median price of medical books and technical books in general. As we will see in Section 6, a higher product price increases the chance that a recommendation will be accepted. Recommendations are also more likely to be accepted for certain religious categories: 4.3% for Christian living and theology and 4.8% for Bibles. In contrast, books not tied to organized religions, such as ones on the subject of new age (2.5%) and occult (2.2%) spirituality, have lower recommendation effectiveness. These results raise the interesting possibility that individuals have greater influence over one another in an organized context, for example through a professional contact or a religious one. There are exceptions of course. For example, Japanese anime DVDs have a strong following in the US, and this is reflected in their frequency and success in recommendations. Another example is that of gardening. In general, recommendations for books relating to gardening have only a modest chance of being accepted, which agrees with the individual prerogative that accompanies this hobby. At the same time, orchid cultivation can be a highly organized and social activity, with frequent "shows" and online communities devoted entirely to orchids. Perhaps because of this, the rate of acceptance of orchid book recommendations is twice as high as those for books on vegetable or tomato growing. 6. MODELING THE RECOMMENDATION SUCCESS We have examined the properties of the recommendation network in relation to viral marketing, but one question still remains: what determines the product's viral marketing success? We present a model which characterizes product categories for which recommendations are more likely to be accepted.
We use a regression of the following product attributes to correlate them with recommendation success:
• r: number of recommendations
• ns: number of senders of recommendations
• nr: number of recipients of recommendations
• p: price of the product
• v: number of reviews of the product
• t: average product rating

Table 2: Statistics by book category. np: number of products in category, n: number of customers, cc: percentage of customers in the largest connected component, rp1: av. # reviews in 2001-2003, rp2: av. # reviews 1st 6 months 2005, vav: average star rating, cav: average number of people recommending product, cav/rp1: ratio of recommenders to reviewers, pm: median price, b: ratio of the number of purchases resulting from a recommendation to the number of recommenders. The symbol ** denotes statistical significance at the 0.01 level, * at the 0.05 level.

category | np | n | cc | rp1 | vav | cav/rp1 | pm | b*100
Books general | 370,230 | 2,860,714 | 1.87 | 5.28 | 4.32 | 1.41 | 14.95 | 3.12
Fiction:
Children's Books | 46,451 | 390,283 | 2.82 | 6.44 | 4.52 | 1.12 | 8.76 | 2.06**
Literature & Fiction | 41,682 | 502,179 | 3.06 | 13.09 | 4.30 | 0.57 | 11.87 | 2.82*
Mystery and Thrillers | 10,734 | 123,392 | 6.03 | 20.14 | 4.08 | 0.36 | 9.60 | 2.40**
Science Fiction & Fantasy | 10,008 | 175,168 | 6.17 | 19.90 | 4.15 | 0.64 | 10.39 | 2.34**
Romance | 6,317 | 60,902 | 5.65 | 12.81 | 4.17 | 0.52 | 6.99 | 1.78**
Teens | 5,857 | 81,260 | 5.72 | 20.52 | 4.36 | 0.41 | 9.56 | 1.94**
Comics & Graphic Novels | 3,565 | 46,564 | 11.70 | 4.76 | 4.36 | 2.03 | 10.47 | 2.30*
Horror | 2,773 | 48,321 | 9.35 | 21.26 | 4.16 | 0.44 | 9.60 | 1.81**
Personal/Leisure:
Religion and Spirituality | 43,423 | 441,263 | 1.89 | 3.87 | 4.45 | 1.73 | 9.99 | 3.13
Health Mind and Body | 33,751 | 572,704 | 1.54 | 4.34 | 4.41 | 2.39 | 13.96 | 3.04
History | 28,458 | 283,406 | 2.74 | 4.34 | 4.30 | 1.27 | 18.00 | 2.84
Home and Garden | 19,024 | 180,009 | 2.91 | 1.78 | 4.31 | 3.48 | 15.37 | 2.26**
Entertainment | 18,724 | 258,142 | 3.65 | 3.48 | 4.29 | 2.26 | 13.97 | 2.66*
Arts and Photography | 17,153 | 179,074 | 3.49 | 1.56 | 4.42 | 3.85 | 20.95 | 2.87
Travel | 12,670 | 113,939 | 3.91 | 2.74 | 4.26 | 1.87 | 13.27 | 2.39**
Sports | 10,183 | 120,103 | 1.74 | 3.36 | 4.34 | 1.99 | 13.97 | 2.26**
Parenting and Families | 8,324 | 182,792 | 0.73 | 4.71 | 4.42 | 2.57 | 11.87 | 2.81
Cooking Food and Wine | 7,655 | 146,522 | 3.02 | 3.14 | 4.45 | 3.49 | 13.97 | 2.38*
Outdoors & Nature | 6,413 | 59,764 | 2.23 | 1.93 | 4.42 | 2.50 | 15.00 | 3.05
Professional/Technical:
Professional & Technical | 41,794 | 459,889 | 1.72 | 1.91 | 4.30 | 3.22 | 32.50 | 4.54**
Business and Investing | 29,002 | 476,542 | 1.55 | 3.61 | 4.22 | 2.94 | 20.99 | 3.62**
Science | 25,697 | 271,391 | 2.64 | 2.41 | 4.30 | 2.42 | 28.00 | 3.90**
Computers and Internet | 18,941 | 375,712 | 2.22 | 4.51 | 3.98 | 3.10 | 34.95 | 3.61**
Medicine | 16,047 | 175,520 | 1.08 | 1.41 | 4.40 | 4.19 | 39.95 | 5.68**
Engineering | 10,312 | 107,255 | 1.30 | 1.43 | 4.14 | 3.85 | 59.95 | 4.10**
Law | 5,176 | 53,182 | 2.64 | 1.89 | 4.25 | 2.67 | 24.95 | 3.66*
Nonfiction/other:
Nonfiction | 55,868 | 560,552 | 2.03 | 3.13 | 4.29 | 1.89 | 18.95 | 3.28**
Reference | 26,834 | 371,959 | 1.94 | 2.49 | 4.19 | 3.04 | 17.47 | 3.21
Biographies and Memoirs | 18,233 | 277,356 | 2.80 | 7.65 | 4.34 | 0.90 | 14.00 | 2.96

From the original set of half a million products, we compute a success rate s for the 48,218 products that had at least one purchase made through a recommendation and for which a price was given. In section 5 we defined the recommendation success rate s as the ratio of the total number of purchases made through recommendations and the number of senders of the recommendations. We decided to use this kind of normalization, rather than normalizing by the total number of recommendations sent, in order not to penalize communities where a few individuals send out many recommendations (figure 2(b)).
Since the variables follow a heavy-tailed distribution, we use the following model:

$s = \exp\left(\sum_i \beta_i \log(x_i) + \epsilon\right)$

where $x_i$ are the product attributes (as described above), and $\epsilon$ is random error. We fit the model using least squares and obtain the coefficients $\beta_i$ shown in table 3. With the exception of the average rating, they are all significant. The only two attributes with a positive coefficient are the number of recommendations and price. This shows that more expensive and more recommended products have a higher success rate. The number of senders and receivers have large negative coefficients, showing that successfully recommended products are more likely to be not so widely popular. They have relatively many recommendations with a small number of senders and receivers, which suggests a very dense recommendation network where lots of recommendations were exchanged between a small community of people. These insights could be of use to marketers - personal recommendations are most effective in small, densely connected communities enjoying expensive products.

Table 3: Regression using the log of the recommendation success rate, ln(s), as the dependent variable. For each coefficient we provide the standard error and the statistical significance level (**: 0.01, *: 0.1).

Variable | Coefficient βi
const | -0.940 (0.025)**
r | 0.426 (0.013)**
ns | -0.782 (0.004)**
nr | -1.307 (0.015)**
p | 0.128 (0.004)**
v | -0.011 (0.002)**
t | -0.027 (0.014)*
R2 | 0.74
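The fit itself is ordinary least squares on logs. A sketch on synthetic data follows; the paper's raw data is not available here, so the generating coefficients below are seeded to resemble Table 3 purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic product attributes in the order (r, ns, nr, p, v, t).
X = rng.uniform(1.0, 100.0, size=(5000, 6))
beta_true = np.array([0.426, -0.782, -1.307, 0.128, -0.011, -0.027])
log_s = -0.940 + np.log(X) @ beta_true + rng.normal(0.0, 0.1, size=5000)

# Least-squares fit of ln(s) = beta_0 + sum_i beta_i * ln(x_i).
A = np.column_stack([np.ones(len(X)), np.log(X)])
beta_hat, *_ = np.linalg.lstsq(A, log_s, rcond=None)
print(np.round(beta_hat, 3))
# ~[-0.940, 0.426, -0.782, -1.307, 0.128, -0.011, -0.027]
```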
7. DISCUSSION AND CONCLUSION Although the retailer may have hoped to boost its revenues through viral marketing, the additional purchases that resulted from recommendations are just a drop in the bucket of sales that occur through the website. Nevertheless, we were able to obtain a number of interesting insights into how viral marketing works that challenge common assumptions made in epidemic and rumor propagation modeling. Firstly, it is frequently assumed in epidemic models that individuals have an equal probability of being infected every time they interact. Contrary to this we observe that the probability of infection decreases with repeated interaction. Marketers should take heed that providing excessive incentives for customers to recommend products could backfire by weakening the credibility of the very same links they are trying to take advantage of. Traditional epidemic and innovation diffusion models also often assume that individuals either have a constant probability of "converting" every time they interact with an infected individual, or that they convert once the fraction of their contacts who are infected exceeds a threshold. In both cases, an increasing number of infected contacts results in an increased likelihood of infection. Instead, we find that the probability of purchasing a product increases with the number of recommendations received, but quickly saturates to a constant and relatively low probability. This means individuals are often impervious to the recommendations of their friends, and resist buying items that they do not want. In network-based epidemic models, extremely highly connected individuals play a very important role. For example, in needle-sharing and sexual contact networks these nodes become the super-spreaders by infecting a large number of people. But these models assume that a high-degree node has as much of a probability of infecting each of its neighbors as a low-degree node does. In contrast, we find that there are limits to how influential high-degree nodes are in the recommendation network. As a person sends out more and more recommendations past a certain number for a product, the success per recommendation declines. This would seem to indicate that individuals have influence over a few of their friends, but not everybody they know. We also presented a simple stochastic model that allows for the presence of relatively large cascades for a few products, but reflects well the general tendency of recommendation chains to terminate after just a short number of steps. We saw that the characteristics of product reviews and effectiveness of recommendations vary by category and price, with more successful recommendations being made on technical or religious books, which presumably are placed in the social context of a school, workplace or place of worship. Finally, we presented a model which shows that smaller and more tightly knit groups tend to be more conducive to viral marketing. So despite the relative ineffectiveness of the viral marketing program in general, we found a number of new insights which we hope will have general applicability to marketing strategies and to future models of viral information spread. 8. REFERENCES [1] Anonymous. Profiting from obscurity: What the "long tail" means for the economics of e-commerce. Economist, 2005. [2] E. Brynjolfsson, Y. Hu, and M. D. Smith. Consumer surplus in the digital economy: Estimating the value of increased product variety at online booksellers. Management Science, 49(11), 2003. [3] K. Burke. As consumer attitudes shift, so must marketing strategies. 2003. [4] J. Chevalier and D. Mayzlin. The effect of word of mouth on sales: Online book reviews. 2004. [5] P. Erdős and A. Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 1960. [6] D. Gruhl, R. Guha, D. Liben-Nowell, and A. Tomkins. Information diffusion through blogspace. In WWW '04, 2004. [7] S. Jurvetson. What exactly is viral marketing? Red Herring, 78:110-112, 2000. [8] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence in a social network. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2003. [9] P. Killworth and H. Bernard. Reverse small world experiment. Social Networks, 1:159-192, 1978. [10] J. Leskovec, A. Singh, and J. Kleinberg. Patterns of influence in a recommendation network. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2006. [11] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76-80, 2003. [12] A. L. Montgomery. Applying quantitative marketing techniques to the internet. Interfaces, 30:90-108, 2001. [13] P. Resnick and R. Zeckhauser. Trust among strangers in internet transactions: Empirical analysis of eBay's reputation system. In The Economics of the Internet and E-Commerce. Elsevier Science, 2002. [14] M. Richardson and P. Domingos. Mining knowledge-sharing sites for viral marketing. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2002. [15] E. M. Rogers. Diffusion of Innovations. Free Press, New York, fourth edition, 1995. [16] D. Strang and S. A. Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265-290, 1998. [17] J. Travers and S. Milgram. An experimental study of the small world problem. Sociometry, 1969. [18] D. Watts. A simple model of global cascades on random networks. PNAS, 99(9):5766-5771, Apr 2002.
The Dynamics of Viral Marketing* (* A longer version of this paper can be found at http://arxiv.org/abs/physics/0509039. † This research was done while at HP Labs. ‡ Research likewise done while at HP Labs.) ABSTRACT We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective. 1. INTRODUCTION With consumers showing increasing resistance to traditional forms of advertising such as TV or newspaper ads, marketers have turned to alternate strategies, including viral marketing. Viral marketing exploits existing social networks by encouraging customers to share product information with their friends. Previously, a few in-depth studies have shown that social networks affect the adoption of individual innovations and products (for a review see [15] or [16]). But until recently it has been difficult to measure how influential person-to-person recommendations actually are over a wide range of products. We were able to directly measure and model the effectiveness of recommendations by studying one online retailer's incentivised viral marketing program. The website gave discounts to customers recommending any of its products to others, and then tracked the resulting purchases and additional recommendations. Although word of mouth can be a powerful factor influencing purchasing decisions, it can be tricky for advertisers to tap into. Some services used by individuals to communicate are natural candidates for viral marketing, because the product can be observed or advertised as part of the communication. Email services such as Hotmail and Yahoo had very fast adoption curves because every email sent through them contained an advertisement for the service and because they were free. Hotmail spent a mere $50,000 on traditional marketing and still grew from zero to 12 million users in 18 months [7]. Google's Gmail captured a significant part of market share in spite of the fact that the only way to sign up for the service was through a referral. Most products cannot be advertised in such a direct way. At the same time the choice of products available to consumers has increased manyfold thanks to online retailers who can supply a much wider variety of products than traditional brick-and-mortar stores. Not only is the variety of products larger, but one observes a 'fat tail' phenomenon, where a large fraction of purchases are of relatively obscure items. On Amazon.com, somewhere between 20 and 40 percent of unit sales fall outside of its top 100,000 ranked products [2]. Rhapsody, a streaming-music service, streams more tracks outside than inside its top 10,000 tunes [1]. Effectively advertising these niche products using traditional advertising approaches is impractical. Therefore using more targeted marketing approaches is advantageous both to the merchant and the consumer, who would benefit from learning about new products.
The problem is partly addressed by the advent of online product and merchant reviews, both at retail sites such as eBay and Amazon, and specialized product comparison sites such as Epinions and CNET. Quantitative marketing techniques have been proposed [12], and the rating of products and merchants has been shown to affect the likelihood of an item being bought [13, 4]. Of further help to the consumer are collaborative filtering recommendations of the form "people who bought x also bought y" [11]. These refinements help consumers discover new products and receive more accurate evaluations, but they cannot completely substitute for personalized recommendations that one receives from a friend or relative. It is human nature to be more interested in what a friend buys than what an anonymous person buys, to be more likely to trust their opinion, and to be more influenced by their actions. Our friends are also acquainted with our needs and tastes, and can make appropriate recommendations. A Lucid Marketing survey found that 68% of individuals consulted friends and relatives before purchasing home electronics--more than the half who used search engines to find product information [3]. Several studies have attempted to model just this kind of network influence. Richardson and Domingos [14] used Epinions' trusted reviewer network to construct an algorithm to maximize viral marketing efficiency, assuming that individuals' probability of purchasing a product depends on the opinions of the trusted peers in their network. Kempe, Kleinberg and Tardos [8] evaluate the efficiency of several algorithms for maximizing the size of the influence set given various models of adoption. While these models address the question of maximizing the spread of influence in a network, they are based on assumed rather than measured influence effects. In contrast, in our study we are able to directly observe the effectiveness of person-to-person word of mouth advertising for hundreds of thousands of products for the first time. We find that most recommendation chains do not grow very large, often terminating with the initial purchase of a product. However, occasionally a product will propagate through a very active recommendation network. We propose a simple stochastic model that seems to explain the propagation of recommendations. Moreover, the characteristics of recommendation networks influence the purchase patterns of their members. For example, individuals' likelihood of purchasing a product initially increases as they receive additional recommendations for it, but a saturation point is quickly reached. Interestingly, as more recommendations are sent between the same two individuals, the likelihood that they will be heeded decreases. We also propose models to identify products for which viral marketing is effective: we find that the category and price of the product play a role, with recommendations of expensive products of interest to small, well connected communities resulting in a purchase more often. We also observe patterns in the timing of recommendations and purchases corresponding to times of day when people are likely to be shopping online or reading email. We report on these and other findings in the following sections. 2. THE RECOMMENDATION NETWORK 2.1 Dataset description Our analysis focuses on the recommendation referral program run by a large retailer. The program rules were as follows.
Each time a person purchases a book, music, or a movie he or she is given the option of sending emails recommending the item to friends. The first person to purchase the same item through a referral link in the email gets a 10% discount. When this happens the sender of the recommendation receives a 10% credit on their purchase. The recommendation dataset consists of 15,646,121 recommendations made among 3,943,084 distinct users. The data was collected from June 5, 2001 to May 16, 2003. In total, 548,523 products were recommended, 99% of them belonging to 4 main product groups: Books, DVDs, Music and Videos. In addition to recommendation data, we also crawled the retailer's website to obtain product categories, reviews and ratings for all products. Of the products in our data set, 5,813 (1%) were discontinued (the retailer no longer provided any information about them). Although the data gives us a detailed and accurate view of recommendation dynamics, it does have its limitations. The only indication of the success of a recommendation is the observation of the recipient purchasing the product through the same vendor. We have no way of knowing if the person had decided instead to purchase elsewhere, borrow, or otherwise obtain the product. The delivery of the recommendation is also somewhat different from one person simply telling another about a product they enjoy, possibly in the context of a broader discussion of similar products. The recommendation is received as a form email including information about the discount program. Someone reading the email might consider it spam, or at least deem it less important than a recommendation given in the context of a conversation. The recipient may also doubt whether the friend is recommending the product because they think the recipient might enjoy it, or are simply trying to get a discount for themselves. Finally, because the recommendation takes place before the recommender receives the product, it might not be based on a direct observation of the product. Nevertheless, we believe that these recommendation networks are reflective of the nature of word of mouth advertising, and give us key insights into the influence of social networks on purchasing decisions. 2.2 Recommendation network statistics For each recommendation, the dataset included the product and product price, sender ID, receiver ID, the sent date, and a buy-bit, indicating whether the recommendation resulted in a purchase and discount. The sender and receiver IDs were shadowed. We represent this data set as a directed multigraph. The nodes represent customers, and a directed edge contains all the information about the recommendation. The edge (i, j, p, t) indicates that i recommended product p to customer j at time t. The typical process generating edges in the recommendation network is as follows: a node i first buys a product p at time t and then recommends it to nodes j1,..., jn. The j nodes can then buy the product and further recommend it. The only way for a node to recommend a product is to first buy it. Note that even if all nodes j buy a product, only the edge to the node jk that first made the purchase (within a week after the recommendation) will be marked by a buy-bit. Because the buy-bit is set only for the first person who acts on a recommendation, we identify additional purchases by the presence of outgoing recommendations for a person, since all recommendations must be preceded by a purchase. We call this type of evidence of purchase a buy-edge; a minimal sketch of this inference appears below.
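Since the buy-edge inference is purely mechanical, it is easy to illustrate. The following is a minimal sketch (not the authors' code); the tuple layout (sender, receiver, product, time, buy_bit) and all node and product names are our own assumptions:

```python
from collections import defaultdict

# Hypothetical edge records: (sender, receiver, product, time, buy_bit).
edges = [
    ("u1", "u2", "book42", 5, 1),  # u2 bought first through the referral link (buy-bit)
    ("u1", "u3", "book42", 5, 0),
    ("u3", "u4", "book42", 9, 0),  # u3 recommends onward, so u3 must have bought
]

# Every sender of a recommendation for product p must have purchased p first,
# so an incoming recommendation to such a node is evidence of a purchase
# (a buy-edge) even though the buy-bit went to someone else.
senders_of = defaultdict(set)  # product -> set of nodes that recommended it
for s, r, p, t, bb in edges:
    senders_of[p].add(s)

buy_edges = [(s, r, p, t) for (s, r, p, t, bb) in edges
             if bb == 0 and r in senders_of[p]]
print(buy_edges)  # [('u1', 'u3', 'book42', 5)]
```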
Note that buy-edges provide only a lower bound on the total number of purchases without discounts. It is possible for a customer to not be the first to act on a recommendation and also to not recommend the product to others. Unfortunately, this was not recorded in the data set. We consider, however, the buy-bits and buy-edges as proxies for the total number of purchases through recommendations. For each product group we took recommendations on all products from the group and created a network. Table 1 (first 7 columns) shows the sizes of various product group recommendation networks, with p being the total number of products in the product group, n the total number of nodes spanned by the group recommendation network and e the number of edges (recommendations). The column eu shows the number of unique edges--disregarding multiple recommendations between the same source and recipient. Figure 1: (a) The size of the largest connected component of customers over time. The inset shows the linear growth in the number of customers n over time. (b) The number of recommendations sent by a user, with each curve representing a different depth of the user in the recommendation chain. A power law exponent γ is fitted to all but the tail. In terms of the number of different items, there are by far the most music CDs, followed by books and videos. There is a surprisingly small number of DVD titles. On the other hand, DVDs account for more than half of all recommendations in the dataset. The DVD network is also the most dense, having about 10 recommendations per node, while books and music have about 2 recommendations per node and videos have only a bit more than 1 recommendation per node. Music recommendations reached about the same number of people as DVDs but used more than 5 times fewer recommendations to achieve the same coverage of the nodes. Book recommendations reached by far the most people--2.8 million. Notice that all networks have a very small number of unique edges. For books, videos and music the number of unique edges is smaller than the number of nodes--this suggests that the networks are highly disconnected [5]. Figure 1 (a) shows the fraction of nodes in the largest weakly connected component over time. Notice the component is very small. Even if we compose a network using all the recommendations in the dataset, the largest connected component contains less than 2.5% (100,420) of the nodes, and the second largest component has only 600 nodes. Still, some smaller communities, numbering in the tens of thousands of purchasers of DVDs in categories such as westerns, classics and Japanese animated films (anime), had connected components spanning about 20% of their members. The inset in figure 1 (a) shows the growth of the customer base over time. Surprisingly it was linear, adding on average 165,000 new users each month, which is an indication that the service itself was not spreading epidemically. Further evidence of non-viral spread is provided by the relatively high percentage (94%) of users who made their first recommendation without having previously received one. Back to table 1: given the total number of recommendations e and purchases (bb + be) influenced by recommendations, we can estimate how many recommendations need to be independently sent over the network to induce a new purchase. Using this metric, books have the most influential recommendations, followed by DVDs and music. For books, one out of 69 recommendations resulted in a purchase.
For DVDs it increases to 108 recommendations per purchase and further increases to 136 for music and 203 for video. Even with these simple counts we can make the first few observations. It seems that some people got quite heavily involved in the recommendation program, and that they tended to recommend a large number of products to the same set of friends (since the number of unique edges is so small). This shows that people tend to buy more DVDs and also like to recommend them to their friends, while they seem to be more conservative with books. One possible reason is that a book is a bigger time investment than a DVD: one usually needs several days to read a book, while a DVD can be viewed in a single evening. One external factor which may be affecting the recommendation patterns for DVDs is the existence of referral websites (www.dvdtalk.com). On these websites people who want to buy a DVD and get a discount ask for recommendations. This way recommendations are made between people who don't really know each other but rather have an economic incentive to cooperate. We were not able to find similar referral sharing sites for books or CDs. 2.3 Forward recommendations Not all people who make a purchase also decide to give recommendations. So we estimate what fraction of people that purchase also decide to recommend forward. To obtain this information we can only use the nodes with purchases that resulted in a discount. The last 3 columns of table 1 show that only about a third of the people that purchase also recommend the product forward. The ratio of forward recommendations is much higher for DVDs than for other kinds of products. Videos also have a higher ratio of forward recommendations, while books have the lowest. This shows that people are most keen on recommending movies, while being more conservative when recommending books and music. Figure 1 (b) shows the cumulative out-degree distribution, that is, the number of people who sent out at least kp recommendations for a product. It shows that the deeper an individual is in the cascade, if they choose to make recommendations, they tend to recommend to a greater number of people on average (the distribution has a higher variance). This effect is probably due to only very heavily recommended products producing large enough cascades to reach a certain depth. We also observe that the probability of an individual making a recommendation at all (which can only occur if they make a purchase) declines after an initial increase as one gets deeper into the cascade. The power-law exponents fitted per level in figure 1 (b) are: level 0: γ = 2.6, level 1: γ = 2.0, level 2: γ = 1.5, level 3: γ = 1.2, level 4: γ = 1.2. Table 1: Product group recommendation statistics. p: number of products, n: number of nodes, e: number of edges (recommendations), eu: number of unique edges, bb: number of buy-bits, be: number of buy-edges. Last 3 columns of the table: fraction of people that purchase and also recommend forward. Purchases: number of nodes that purchased. Forward: nodes that purchased and then also recommended the product.
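The cumulative out-degree distribution shown in figure 1 (b) is straightforward to compute. Below is a minimal sketch using a synthetic heavy-tailed sample in place of the real per-sender counts (which we do not reproduce here); the cut-off before fitting is our own choice:

```python
import numpy as np

rng = np.random.default_rng(0)
out_degree = rng.zipf(2.0, size=10_000)  # synthetic stand-in for recommendation counts

# CCDF: for each k, the number of senders with at least k outgoing recommendations.
deg_sorted = np.sort(out_degree)
ks = np.unique(deg_sorted)
ccdf = len(deg_sorted) - np.searchsorted(deg_sorted, ks, side="left")

# On log-log axes a power law appears as a straight line; fit all but the tail.
mask = ccdf > 10  # drop the noisy tail before fitting
slope, _ = np.polyfit(np.log(ks[mask]), np.log(ccdf[mask]), 1)
print(f"fitted CCDF slope: {slope:.2f}")  # approx. 1 - gamma for P(X = k) ~ k^-gamma
```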
2.4 Identifying cascades As customers continue forwarding recommendations, they contribute to the formation of cascades. In order to identify cascades, i.e. the "causal" propagation of recommendations, we track successful recommendations as they influence purchases and further recommendations. We define a recommendation to be successful if it reached a node before its first purchase. We consider only the first purchase of an item, because there are many cases when a person made multiple purchases of the same product, and in between those purchases she may have received new recommendations. In this case one cannot conclude that recommendations following the first purchase influenced the later purchases. Each cascade is a network consisting of customers (nodes) who purchased the same product as a result of each other's recommendations (edges). We delete late recommendations--all incoming recommendations that happened after the first purchase of the product. This way we make the network time-increasing or causal--for each node all incoming edges (recommendations) occurred before all outgoing edges. Now each connected component represents a time-obeying propagation of recommendations. Figure 2 shows two typical product recommendation networks: (a) a medical study guide and (b) a Japanese graphic novel. Throughout the dataset we observe very similar patterns. Most product recommendation networks consist of a large number of small disconnected components where we do not observe cascades. Then there is usually a small number of relatively small components with recommendations successfully propagating. This observation is reflected in the heavy-tailed distribution of cascade sizes (see figure 4, which plots cascade size vs. count, with the bold line presenting a power-law fit), having a power-law exponent close to 1 for DVDs in particular. We also notice bursts of recommendations (figure 2 (b)). Some nodes recommend to many friends, forming a star-like pattern. Figure 3 shows the distribution of the number of recommendations and number of purchases made by a single node in the recommendation network. Notice the power-law distributions and long flat tails. The most active person made 83,729 recommendations and purchased 4,416 different items. Finally, we also sometimes observe 'collisions', where nodes receive recommendations from two or more sources. A detailed enumeration and analysis of observed topological cascade patterns for this dataset is made in [10]. 2.5 The recommendation propagation model A simple model can help explain how the wide variance we observe in the number of recommendations made by individuals can lead to power-laws in cascade sizes (figure 4). The model assumes that each recipient of a recommendation will forward it to others if its value exceeds an arbitrary threshold that the individual sets for herself. Since exceeding this value is a probabilistic event, let us call $p_t$ the probability that at time step t the recommendation exceeds the threshold. In that case the number of recommendations $N_{t+1}$ at time (t + 1) is given in terms of the number of recommendations at an earlier time by $N_{t+1} = (1 + p_t)\,N_t$, where the probability $p_t$ is defined over the unit interval. Notice that, because of the probabilistic nature of the threshold being exceeded, one can only compute the final distribution of recommendation chain lengths, which we now proceed to do.
Subtracting the term $N_t$ from both sides of this equation and dividing by it, we obtain $\frac{N_{t+1} - N_t}{N_t} = p_t$. Summing both sides from the initial time to some very large time T and assuming that for long times the numerator is smaller than the denominator (a reasonable assumption), we get $\int \frac{dN}{N} \simeq \sum_t p_t$. The left-hand integral is just ln(N), and the right-hand side is a sum of random variables, which in the limit of a very large uncorrelated number of recommendations is normally distributed (central limit theorem). This means that the logarithm of the number of messages is normally distributed, or equivalently, that the number of messages passed is log-normally distributed. In other words, the probability density for N is given by $P(N) = \frac{1}{N\sigma\sqrt{2\pi}} \exp\left[-\frac{(\ln N - \mu)^2}{2\sigma^2}\right]$, which, for large variances, describes a behavior whereby the typical number of recommendations is small (the mode of the distribution) but there are unlikely events of large chains of recommendations which are also observable. Furthermore, for large variances, the lognormal distribution can behave like a power law for a range of values. In order to see this, take the logarithms on both sides of the equation (equivalent to a log-log plot) and one obtains $\ln P(N) = -\ln N - \ln(\sigma\sqrt{2\pi}) - \frac{(\ln N - \mu)^2}{2\sigma^2}$. So, for large $\sigma$, the last term of the right-hand side goes to zero, and since the second term is a constant, one obtains a power-law behavior with exponent value of minus one. There are other models which produce power-law distributions of cascade sizes, but we present ours for its simplicity, since it does not depend on network topology [6] or critical thresholds in the probability of a recommendation being accepted [18]. 3. SUCCESS OF RECOMMENDATIONS So far we only looked into the aggregate statistics of the recommendation network. Next, we ask questions about the effectiveness of recommendations in the recommendation network itself. First, we analyze the probability of purchasing as one gets more and more recommendations. Next, we measure recommendation effectiveness as two people exchange more and more recommendations. Lastly, we observe the recommendation network from the perspective of the sender of the recommendation. Does a node that makes more recommendations also influence more purchases? 3.1 Probability of buying versus number of incoming recommendations First, we examine how the probability of purchasing changes as one gets more and more recommendations. One would expect that a person is more likely to buy a product if she gets more recommendations. On the other hand, one would also think that there is a saturation point--if a person hasn't bought a product after a number of recommendations, they are not likely to change their minds after receiving even more of them. So, how many recommendations are too many? Figure 5 shows the probability of purchasing a product as a function of the number of incoming recommendations on the product. As we move to higher numbers of incoming recommendations, the number of observations drops rapidly. For example, there were 5 million cases with 1 incoming recommendation on a book, and only 58 cases where a person got 20 incoming recommendations on a particular book. The maximum was 30 incoming recommendations. For these reasons we cut off the plot when the number of observations becomes too small and the error bars too large. Figure 5 (a) shows that, overall, book recommendations are rarely followed. Even more surprisingly, as more and more recommendations are received, their success decreases. We observe a peak in probability of buying at 2 incoming recommendations and then a slow drop.
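Computing such a curve amounts to grouping (person, product) pairs by the number of recommendations received before any purchase. A minimal sketch follows; the field names and the observation cut-off are our own assumptions, mirroring the cut-off described above:

```python
from collections import defaultdict

def buy_probability_curve(recs, bought, min_obs=50):
    """recs[(person, product)]: number of incoming recommendations received;
    bought[(person, product)]: True if that person purchased that product."""
    trials = defaultdict(int)  # k -> number of (person, product) pairs with k recs
    buys = defaultdict(int)    # k -> how many of those pairs ended in a purchase
    for key, k in recs.items():
        trials[k] += 1
        buys[k] += bool(bought.get(key, False))
    # Keep only points with enough observations, as in figure 5's cut-off.
    return {k: buys[k] / trials[k] for k in sorted(trials) if trials[k] >= min_obs}
```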
For DVDs (figure 5 (b)) we observe a saturation around 10 incoming recommendations. This means that after a person gets 10 recommendations on a particular DVD, they become immune to them--their probability of buying does not increase anymore. The number of observations is 2.5 million at 1 incoming recommendation and 100 at 60 incoming recommendations. The maximal number of received recommendations is 172 (and that person did not buy). Figure 5: Probability of buying a book (DVD) given a number of incoming recommendations. 3.2 Success of subsequent recommendations Next, we analyze how the effectiveness of recommendations changes as two persons exchange more and more recommendations. A large number of exchanged recommendations can be a sign of trust and influence, but a sender of too many recommendations can be perceived as a spammer. A person who recommends only a few products will have her friends' attention, but one who floods her friends with all sorts of recommendations will start to lose her influence. We measure the effectiveness of recommendations as a function of the total number of previously exchanged recommendations between the two nodes. We construct the experiment in the following way. For every recommendation r on some product p between nodes u and v, we first determine how many recommendations were exchanged between u and v before recommendation r. Then we check whether v, the recipient of the recommendation, purchased p after recommendation r arrived. For the experiment we consider only node pairs (u, v) where there were at least a total of 10 recommendations sent from u to v. We perform the experiment using only recommendations from the same product group. Figure 6 shows the probability of buying as a function of the total number of exchanged recommendations between two persons up to that point. For books we observe that the effectiveness of recommendation remains about constant up to 3 exchanged recommendations. As the number of exchanged recommendations increases, the probability of buying starts to decrease to about half of the original value and then levels off. For DVDs we observe an immediate and consistent drop. This experiment shows that recommendations start to lose effect after more than two or three are passed between two people. We performed the experiment also for video and music, but the number of observations was too low and the measurements were noisy. Figure 6: The effectiveness of recommendations with the total number of exchanged recommendations. 3.3 Success of outgoing recommendations In previous sections we examined the data from the viewpoint of the receiver of the recommendation. Now we look from the viewpoint of the sender. The two interesting questions are: how does the probability of getting a 10% credit change with the number of outgoing recommendations; and given a number of outgoing recommendations, how many purchases will they influence? One would expect that recommendations would be the most effective when recommended to the right subset of friends. If one is very selective and recommends to too few friends, then the chances of success are slim. On the other hand, recommending to everyone and spamming them with recommendations may have limited returns as well. The top row of figure 7 shows how the average number of purchases changes with the number of outgoing recommendations.
For books, music, and videos the number of purchases soon saturates: it grows fast up to around 10 outgoing recommendations and then the trend either slows or starts to drop. DVDs exhibit different behavior, with the expected number of purchases increasing throughout. But if we plot the probability of getting a 10% credit as a function of the number of outgoing recommendations, as in the bottom row of figure 7, we see that the success of DVD recommendations saturates as well, while books, videos and music have qualitatively similar trends. The difference in the curves for DVD recommendations points to the presence of collisions in the dense DVD network, which has 10 recommendations per node and around 400 per product--an order of magnitude more than other product groups. This means that many different individuals are recommending to the same person, and after that person makes a purchase, even though all of them made a 'successful recommendation' by our definition, only one of them receives a credit. Figure 7: Top row: number of resulting purchases given a number of outgoing recommendations. Bottom row: probability of getting a credit given a number of outgoing recommendations. 4. TIMING OF RECOMMENDATIONS AND PURCHASES The recommendation referral program encourages people to purchase as soon as possible after they get a recommendation, since this maximizes the probability of getting a discount. We study the time lag between the recommendation and the purchase of different product groups, effectively how long it takes a person to receive a recommendation, consider it, and act on it. We present the histograms of the "thinking time", i.e. the difference between the time of purchase and the time the last recommendation was received for the product prior to the purchase (figure 8: the time between the recommendation and the actual purchase, using all purchases). We use a bin size of 1 day. Around 35%-40% of book and DVD purchases occurred within a day after the last recommendation was received. For DVDs, 16% of purchases occur more than a week after the last recommendation, while this drops to 10% for books. In contrast, if we consider the lag between the purchase and the first recommendation, only 23% of DVD purchases are made within a day, while the proportion stays the same for books. This reflects a greater likelihood for a person to receive multiple recommendations for a DVD than for a book. At the same time, DVD recommenders tend to send out many more recommendations, only one of which can result in a discount. Individuals then often miss their chance of a discount, which is reflected in the high ratio (78%) of recommended DVD purchases that did not get a discount (see table 1, columns bb and be). In contrast, for books, only 21% of purchases through recommendations did not receive a discount. We also measure the variation in intensity by time of day for three different activities in the recommendation system: recommendations (figure 9 (a)), all purchases (figure 9 (b)), and finally just the purchases which resulted in a discount (figure 9 (c)). Each is given as a total count by hour of day. The recommendations and purchases follow the same pattern. The only small difference is that purchases reach a sharper peak in the afternoon (after 3pm Pacific Time, 6pm Eastern Time). The purchases that resulted in a discount look like a negative image of the first two figures.
This means that most of the discounted purchases happened in the morning, when the traffic (number of purchases/recommendations) on the retailer's website was low. This makes sense, since most of the recommendations happened during the day, and if a person wanted to get the discount by being the first one to purchase, she had the highest chance when the traffic on the website was lowest. Figure 9: Time of day for purchases and recommendations. (a) shows the distribution of recommendations over the day, (b) shows all purchases and (c) shows only purchases that resulted in getting a discount. 5. RECOMMENDATION EFFECTIVENESS BY BOOK CATEGORY Social networks are a product of the contexts that bring people together. Some contexts result in social ties that are more effective at conducting an action. For example, in small world experiments, where participants attempt to reach a target individual through their chain of acquaintances, profession trumped geography, which in turn was more useful in locating a target than attributes such as religion or hobbies [9, 17]. In the context of product recommendations, we can ask whether a recommendation for a work of fiction, which may be made by any friend or neighbor, is more or less influential than a recommendation for a technical book, which may be made by a colleague at work or school. Table 2 shows recommendation trends for all top-level book categories by subject. An analysis of other product types can be found in the extended version of the paper. For clarity, we group the results by 4 different category types: fiction, personal/leisure, professional/technical, and nonfiction/other. Fiction encompasses categories such as Sci-Fi and Romance, as well as children's and young adult books. Personal/Leisure encompasses everything from gardening, photography and cooking to health and religion. First, we compare the relative number of recommendations to reviews posted on the site (column cav/rp1 of table 2). Surprisingly, we find that the number of people making personal recommendations was only a few times greater than the number of people posting a public review on the website. We observe that fiction books have relatively few recommendations compared to the number of reviews, while professional and technical books have more recommendations than reviews. This could reflect several factors. One is that people feel more confident reviewing fiction than technical books. Another is that they hesitate to recommend a work of fiction before reading it themselves, since the recommendation must be made at the point of purchase. Yet another explanation is that the median price of a work of fiction is lower than that of a technical book. This means that the discount received for successfully recommending a mystery novel or thriller is lower, and hence people have less incentive to send recommendations. Next, we measure the per-category efficacy of recommendations by observing the ratio of the number of purchases occurring within a week following a recommendation to the number of recommenders for each book subject category (column b of table 2). On average, only 2% of the recommenders of a book received a discount because their recommendation was accepted, and another 1% made a recommendation that resulted in a purchase, but not a discount. We observe marked differences in the response to recommendation for different categories of books. Fiction in general is not very effectively recommended, with only around 2% of recommenders succeeding.
The efficacy was a bit higher (around 3%) for non-fiction books dealing with personal and leisure pursuits, but is significantly higher in the professional and technical category. Medical books have nearly double the average rate of recommendation acceptance. This could be in part attributed to the higher median price of medical books and technical books in general. As we will see in Section 6, a higher product price increases the chance that a recommendation will be accepted. Recommendations are also more likely to be accepted for certain religious categories: 4.3% for Christian living and theology and 4.8% for Bibles. In contrast, books not tied to organized religions, such as ones on the subject of new age (2.5%) and occult (2.2%) spirituality, have lower recommendation effectiveness. These results raise the interesting possibility that individuals have greater influence over one another in an organized context, for example through a professional contact or a religious one. There are exceptions of course. For example, Japanese anime DVDs have a strong following in the US, and this is reflected in their frequency and success in recommendations. Another example is that of gardening. In general, recommendations for books relating to gardening have only a modest chance of being accepted, which agrees with the individual prerogative that accompanies this hobby. At the same time, orchid cultivation can be a highly organized and social activity, with frequent 'shows' and online communities devoted entirely to orchids. Perhaps because of this, the rate of acceptance of orchid book recommendations is twice as high as those for books on vegetable or tomato growing. Table 2: Statistics by book category. np: number of products in category, n: number of customers, cc: percentage of customers in the largest connected component, rp1: av. #reviews in 2001--2003, rp2: av. #reviews, cav/rp1: ratio of recommenders to reviewers, pm: median price, b: ratio of the number of purchases resulting from a recommendation to the number of recommenders. The symbol ** denotes statistical significance at the 0.01 level, * at the 0.05 level. 6. MODELING THE RECOMMENDATION SUCCESS We have examined the properties of the recommendation network in relation to viral marketing, but one question still remains: what determines the product's viral marketing success? We present a model which characterizes product categories for which recommendations are more likely to be accepted. We use a regression of the following product attributes to correlate them with recommendation success:
• r: number of recommendations
• ns: number of senders of recommendations
• nr: number of recipients of recommendations
• p: price of the product
• v: number of reviews of the product
• t: average product rating
From the original set of half a million products, we compute a success rate s for the 48,218 products that had at least one purchase made through a recommendation and for which a price was given. In Section 5 we defined the recommendation success rate s as the ratio of the total number of purchases made through recommendations to the number of senders of the recommendations. We decided to use this kind of normalization, rather than normalizing by the total number of recommendations sent, in order not to penalize communities where a few individuals send out many recommendations (figure 2 (b)). Since the variables follow a heavy-tailed distribution, we use the following model: $s = \exp\left(\sum_i \beta_i \log(x_i) + \epsilon_i\right)$, where $x_i$ are the product attributes listed above, and $\epsilon_i$ is random error. A sketch of how such a fit can be carried out appears below.
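The fit itself is ordinary least squares on the log-transformed variables. The following is a minimal sketch using synthetic data in place of the real product attributes; the coefficients used to generate the data simply echo Table 3, and a statistics package would additionally provide standard errors and significance levels:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48_218  # products with at least one purchase through a recommendation
X = rng.lognormal(1.0, 1.0, size=(n, 6))  # stand-ins for r, ns, nr, p, v, t
b_true = np.array([0.43, -0.78, -1.31, 0.13, -0.01, -0.03])
log_s = -0.94 + np.log(X) @ b_true + rng.normal(0.0, 0.5, size=n)

# OLS on ln s = b0 + sum_i b_i ln x_i + eps.
A = np.column_stack([np.ones(n), np.log(X)])
coef, *_ = np.linalg.lstsq(A, log_s, rcond=None)
print(dict(zip(["const", "r", "ns", "nr", "p", "v", "t"], coef.round(3))))
```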
We fit the model using least squares and obtain the coefficients $\beta_i$ shown in Table 3. With the exception of the average rating, they are all significant. The only two attributes with a positive coefficient are the number of recommendations and the price. This shows that more expensive and more recommended products have a higher success rate. The number of senders and receivers have large negative coefficients, showing that successfully recommended products tend not to be widely popular. They have relatively many recommendations with a small number of senders and receivers, which suggests a very dense recommendation network where many recommendations were exchanged within a small community of people. These insights could be of use to marketers--personal recommendations are most effective in small, densely connected communities enjoying expensive products. Table 3: Regression using the log of the recommendation success rate, ln(s), as the dependent variable. For each coefficient we provide the standard error and the statistical significance level (**: 0.01, *: 0.1). 7. DISCUSSION AND CONCLUSION Although the retailer may have hoped to boost its revenues through viral marketing, the additional purchases that resulted from recommendations are just a drop in the bucket of sales that occur through the website. Nevertheless, we were able to obtain a number of interesting insights into how viral marketing works that challenge common assumptions made in epidemic and rumor propagation modeling. Firstly, it is frequently assumed in epidemic models that individuals have an equal probability of being infected every time they interact. Contrary to this, we observe that the probability of infection decreases with repeated interaction. Marketers should take heed that providing excessive incentives for customers to recommend products could backfire by weakening the credibility of the very same links they are trying to take advantage of. Traditional epidemic and innovation diffusion models also often assume that individuals either have a constant probability of 'converting' every time they interact with an infected individual or that they convert once the fraction of their contacts who are infected exceeds a threshold. In both cases, an increasing number of infected contacts results in an increased likelihood of infection. Instead, we find that the probability of purchasing a product increases with the number of recommendations received, but quickly saturates to a constant and relatively low probability. This means individuals are often impervious to the recommendations of their friends, and resist buying items that they do not want. In network-based epidemic models, extremely highly connected individuals play a very important role. For example, in needle sharing and sexual contact networks these nodes become the "super-spreaders" by infecting a large number of people. But these models assume that a high degree node has as much of a probability of infecting each of its neighbors as a low degree node does. In contrast, we find that there are limits to how influential high degree nodes are in the recommendation network. As a person sends out more and more recommendations past a certain number for a product, the success per recommendation declines. This would seem to indicate that individuals have influence over a few of their friends, but not everybody they know.
We also presented a simple stochastic model that allows for the presence of relatively large cascades for a few products, but reflects well the general tendency of recommendation chains to terminate after just a small number of steps. We saw that the characteristics of product reviews and the effectiveness of recommendations vary by category and price, with more successful recommendations being made on technical or religious books, which presumably are placed in the social context of a school, workplace or place of worship. Finally, we presented a model which shows that smaller and more tightly knit groups tend to be more conducive to viral marketing. So despite the relative ineffectiveness of the viral marketing program in general, we found a number of new insights which we hope will have general applicability to marketing strategies and to future models of viral information spread.
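As an illustration of the stochastic model from Section 2.5, the following minimal simulation shows how multiplicative growth by random factors (1 + p_t) yields approximately log-normal cascade sizes; the geometric chain-length distribution and the uniform choice of p_t are our own assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def cascade_size():
    # N_{t+1} = (1 + p_t) N_t, with p_t drawn from the unit interval and a
    # random (assumed geometric) number of steps per cascade.
    steps = rng.geometric(0.1)
    p = rng.uniform(0.0, 1.0, size=steps)
    return float(np.exp(np.log1p(p).sum()))  # N = prod(1 + p_t), computed in logs

sizes = np.array([cascade_size() for _ in range(100_000)])
log_sizes = np.log(sizes)
print(f"ln N: mean={log_sizes.mean():.2f}, std={log_sizes.std():.2f}")
# Since ln N is a sum of independent terms, N is roughly log-normal; for large
# enough variance this mimics a power law with slope near -1 over a range of sizes.
```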
The Dynamics of Viral Marketing * ABSTRACT We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective. 1. INTRODUCTION With consumers showing increasing resistance to traditional forms of advertising such as TV or newspaper ads, marketers have turned to alternate strategies, including viral marketing. Viral marketing exploits existing social networks by encouraging customers to share product information with their friends. Previously, a few in depth studies have shown that social networks affect the adoption of * A longer version of this paper can be found at http://arxiv.org/abs/physics/0509039 tThis research was done while at HP Labs. $Research likewise done while at HP Labs. individual innovations and products (for a review see [15] or [16]). But until recently it has been difficult to measure how influential person-to-person recommendations actually are over a wide range of products. We were able to directly measure and model the effectiveness of recommendations by studying one online retailer's incentivised viral marketing program. The website gave discounts to customers recommending any of its products to others, and then tracked the resulting purchases and additional recommendations. Although word of mouth can be a powerful factor influencing purchasing decisions, it can be tricky for advertisers to tap into. Some services used by individuals to communicate are natural candidates for viral marketing, because the product can be observed or advertised as part of the communication. Email services such as Hotmail and Yahoo had very fast adoption curves because every email sent through them contained an advertisement for the service and because they were free. Hotmail spent a mere $50,000 on traditional marketing and still grew from zero to 12 million users in 18 months [7]. Google's Gmail captured a significant part of market share in spite of the fact that the only way to sign up for the service was through a referral. Most products cannot be advertised in such a direct way. At the same time the choice of products available to consumers has increased manyfold thanks to online retailers who can supply a much wider variety of products than traditional brick-and-mortar stores. Not only is the variety of products larger, but one observes a ` fat tail' phenomenon, where a large fraction of purchases are of relatively obscure items. On Amazon.com, somewhere between 20 to 40 percent of unit sales fall outside of its top 100,000 ranked products [2]. Rhapsody, a streaming-music service, streams more tracks outside than inside its top 10,000 tunes [1]. Effectively advertising these niche products using traditional advertising approaches is impractical. Therefore using more targeted marketing approaches is advantageous both to the merchant and the consumer, who would benefit from learning about new products. 
The problem is partly addressed by the advent of online product and merchant reviews, both at retail sites such as EBay and Amazon, and specialized product comparison sites such as Epinions and CNET. Quantitative marketing techniques have been proposed [12], and the rating of products and merchants has been shown to effect the likelihood of an item being bought [13, 4]. Of further help to the consumer are collaborative filtering recommendations of the form "people who bought x also bought y" feature [11]. These refinements help consumers discover new products and receive more accurate evaluations, but they cannot completely substitute personalized recommendations that one receives from a friend or relative. It is human nature to be more interested in what a friend buys than what an anonymous person buys, to be more likely to trust their opinion, and to be more influenced by their actions. Our friends are also acquainted with our needs and tastes, and can make appropriate recommendations. A Lucid Marketing survey found that 68% of individuals consulted friends and relatives before purchasing home electronics--more than the half who used search engines to find product information [3]. Several studies have attempted to model just this kind of network influence. Richardson and Domingos [14] used Epinions' trusted reviewer network to construct an algorithm to maximize viral marketing efficiency assuming that individuals' probability of purchasing a product depends on the opinions on the trusted peers in their network. Kempe, Kleinberg and Tardos [8] evaluate the efficiency of several algorithms for maximizing the size of influence set given various models of adoption. While these models address the question of maximizing the spread of influence in a network, they are based on assumed rather than measured influence effects. In contrast, in our study we are able to directly observe the effectiveness of person to person word of mouth advertising for hundreds of thousands of products for the first time. We find that most recommendation chains do not grow very large, often terminating with the initial purchase of a product. However, occasionally a product will propagate through a very active recommendation network. We propose a simple stochastic model that seems to explain the propagation of recommendations. Moreover, the characteristics of recommendation networks influence the purchase patterns of their members. For example, individuals' likelihood of purchasing a product initially increases as they receive additional recommendations for it, but a saturation point is quickly reached. Interestingly, as more recommendations are sent between the same two individuals, the likelihood that they will be heeded decreases. We also propose models to identify products for which viral marketing is effective: We find that the category and price of product plays a role, with recommendations of expensive products of interest to small, well connected communities resulting in a purchase more often. We also observe patterns in the timing of recommendations and purchases corresponding to times of day when people are likely to be shopping online or reading email. We report on these and other findings in the following sections. 2. THE RECOMMENDATION NETWORK 2.1 Dataset description 2.2 Recommendation network statistics 2.3 Forward recommendations 2.4 Identifying cascades 2.5 The recommendation propagation model 3. 
SUCCESS OF RECOMMENDATIONS 3.1 Probability of buying versus number of incoming recommendations 3.2 Success of subsequent recommendations 3.3 Success of outgoing recommendations 4. TIMING OF RECOMMENDATIONS AND PURCHASES 5. RECOMMENDATION EFFECTIVENESS BY BOOK CATEGORY 6. MODELING THE RECOMMENDATION SUCCESS 7. DISCUSSION AND CONCLUSION Although the retailer may have hoped to boost its revenues through viral marketing, the additional purchases that resulted from recommendations are just a drop in the bucket of sales that occur through the website. Nevertheless, we were able to obtain a number of interesting insights into how viral marketing works that challenge common assumptions made in epidemic and rumor propagation modeling. Firstly, it is frequently assumed in epidemic models that individuals have equal probability of being infected every time they interact. Contrary to this we observe that the probability of infection decreases with repeated interaction. Marketers should take heed that providing excessive incentives for customers to recommend products could backfire by weakening the credibility of the very same links they are trying to take advantage of. Traditional epidemic and innovation diffusion models also often assume that individuals either have a constant probability of ` converting' every time they interact with an infected individual or that they convert once the fraction of their contacts who are infected exceeds a threshold. In both cases, an increasing number of infected contacts results in an increased likelihood of infection. Instead, we find that the probability of purchasing a product increases with the number of recommendations received, but quickly saturates to a constant and relatively low probability. This means individuals are often impervious to the recommendations of their friends, and resist buying items that they do not want. In network-based epidemic models, extremely highly connected individuals play a very important role. For example, in needle sharing and sexual contact networks these nodes become the "super-spreaders" by infecting a large number of people. But these models assume that a high degree node has as much of a probability of infecting each of its neighbors as a low degree node does. In contrast, we find that there are limits to how influential high degree nodes are in the recommendation network. As a person sends out more and more recommendations past a certain number for a product, the success per recommendation declines. This would seem to indicate that individuals have influence over a few of their friends, but not everybody they know. We also presented a simple stochastic model that allows for the presence of relatively large cascades for a few products, but reflects well the general tendency of recommendation chains to terminate after just a short number of steps. We saw that the characteristics of product reviews and effectiveness of recommendations vary by category and price, with more successful recommendations being made on technical or religious books, which presumably are placed in the social context of a school, workplace or place of worship. Finally, we presented a model which shows that smaller and more tightly knit groups tend to be more conducive to viral marketing. So despite the relative ineffectiveness of the viral marketing program in general, we found a number of new insights which we hope will have general applicability to marketing strategies and to future models of viral information spread.
The Dynamics of Viral Marketing * ABSTRACT We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective. 1. INTRODUCTION With consumers showing increasing resistance to traditional forms of advertising such as TV or newspaper ads, marketers have turned to alternate strategies, including viral marketing. Viral marketing exploits existing social networks by encouraging customers to share product information with their friends. Previously, a few in depth studies have shown that social networks affect the adoption of * A longer version of this paper can be found at http://arxiv.org/abs/physics/0509039 tThis research was done while at HP Labs. $Research likewise done while at HP Labs. individual innovations and products (for a review see [15] or [16]). But until recently it has been difficult to measure how influential person-to-person recommendations actually are over a wide range of products. We were able to directly measure and model the effectiveness of recommendations by studying one online retailer's incentivised viral marketing program. The website gave discounts to customers recommending any of its products to others, and then tracked the resulting purchases and additional recommendations. Some services used by individuals to communicate are natural candidates for viral marketing, because the product can be observed or advertised as part of the communication. Hotmail spent a mere $50,000 on traditional marketing and still grew from zero to 12 million users in 18 months [7]. Most products cannot be advertised in such a direct way. At the same time the choice of products available to consumers has increased manyfold thanks to online retailers who can supply a much wider variety of products than traditional brick-and-mortar stores. Not only is the variety of products larger, but one observes a ` fat tail' phenomenon, where a large fraction of purchases are of relatively obscure items. On Amazon.com, somewhere between 20 to 40 percent of unit sales fall outside of its top 100,000 ranked products [2]. Effectively advertising these niche products using traditional advertising approaches is impractical. Therefore using more targeted marketing approaches is advantageous both to the merchant and the consumer, who would benefit from learning about new products. Quantitative marketing techniques have been proposed [12], and the rating of products and merchants has been shown to effect the likelihood of an item being bought [13, 4]. Of further help to the consumer are collaborative filtering recommendations of the form "people who bought x also bought y" feature [11]. These refinements help consumers discover new products and receive more accurate evaluations, but they cannot completely substitute personalized recommendations that one receives from a friend or relative. Our friends are also acquainted with our needs and tastes, and can make appropriate recommendations. 
A Lucid Marketing survey found that 68% of individuals consulted friends and relatives before purchasing home electronics--more than the half who used search engines to find product information [3]. Several studies have attempted to model just this kind of network influence. Richardson and Domingos [14] used Epinions' trusted reviewer network to construct an algorithm to maximize viral marketing efficiency assuming that individuals' probability of purchasing a product depends on the opinions on the trusted peers in their network. Kempe, Kleinberg and Tardos [8] evaluate the efficiency of several algorithms for maximizing the size of influence set given various models of adoption. While these models address the question of maximizing the spread of influence in a network, they are based on assumed rather than measured influence effects. In contrast, in our study we are able to directly observe the effectiveness of person to person word of mouth advertising for hundreds of thousands of products for the first time. We find that most recommendation chains do not grow very large, often terminating with the initial purchase of a product. However, occasionally a product will propagate through a very active recommendation network. We propose a simple stochastic model that seems to explain the propagation of recommendations. Moreover, the characteristics of recommendation networks influence the purchase patterns of their members. For example, individuals' likelihood of purchasing a product initially increases as they receive additional recommendations for it, but a saturation point is quickly reached. Interestingly, as more recommendations are sent between the same two individuals, the likelihood that they will be heeded decreases. We also propose models to identify products for which viral marketing is effective: We find that the category and price of product plays a role, with recommendations of expensive products of interest to small, well connected communities resulting in a purchase more often. We also observe patterns in the timing of recommendations and purchases corresponding to times of day when people are likely to be shopping online or reading email. 7. DISCUSSION AND CONCLUSION Although the retailer may have hoped to boost its revenues through viral marketing, the additional purchases that resulted from recommendations are just a drop in the bucket of sales that occur through the website. Nevertheless, we were able to obtain a number of interesting insights into how viral marketing works that challenge common assumptions made in epidemic and rumor propagation modeling. Firstly, it is frequently assumed in epidemic models that individuals have equal probability of being infected every time they interact. Contrary to this we observe that the probability of infection decreases with repeated interaction. Traditional epidemic and innovation diffusion models also often assume that individuals either have a constant probability of ` converting' every time they interact with an infected individual or that they convert once the fraction of their contacts who are infected exceeds a threshold. In both cases, an increasing number of infected contacts results in an increased likelihood of infection. Instead, we find that the probability of purchasing a product increases with the number of recommendations received, but quickly saturates to a constant and relatively low probability. 
This means individuals are often impervious to the recommendations of their friends, and resist buying items that they do not want. In network-based epidemic models, extremely highly connected individuals play a very important role. For example, in needle sharing and sexual contact networks these nodes become the "super-spreaders" by infecting a large number of people. But these models assume that a high-degree node has as much of a probability of infecting each of its neighbors as a low-degree node does. In contrast, we find that there are limits to how influential high-degree nodes are in the recommendation network. As a person sends out more and more recommendations past a certain number for a product, the success per recommendation declines. This would seem to indicate that individuals have influence over a few of their friends, but not everybody they know. We also presented a simple stochastic model that allows for the presence of relatively large cascades for a few products, but reflects well the general tendency of recommendation chains to terminate after just a small number of steps. Finally, we presented a model which shows that smaller and more tightly knit groups tend to be more conducive to viral marketing. So despite the relative ineffectiveness of the viral marketing program in general, we found a number of new insights which we hope will have general applicability to marketing strategies and to future models of viral information spread.
C-66
Heuristics-Based Scheduling of Composite Web Service Workloads
Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.
[ "heurist", "schedul", "web servic", "streamlin function", "end-to-end workflow composit", "servic request", "multi-organis environ", "schedul servic", "busi process workflow", "busi valu metric", "schedul agent", "multi-tier system", "qo-defin limit", "qo", "workflow" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "U", "U" ]
Heuristics-Based Scheduling of Composite Web Service Workloads Thomas Phan Wen-Syan Li IBM Almaden Research Center 650 Harry Rd., San Jose, CA 95120 {phantom,wsl}@us.ibm.com ABSTRACT Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-distributed applications; D.2.8 [Software Engineering]: Metrics-complexity measures, performance measures 1. INTRODUCTION Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems. Although previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints. In this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner. The problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning. Consider the example shown in Figure 1 that illustrates our problem space. Workflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent requested service from customers or automated systems and that the workflow has already been composed with an existing choreography toolkit. These workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers. In this example, a workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth. Each of these requests targets a particular service type (e.g. airline reservations, hotel reservations, car reservations, etc.), and for each service type, there are multiple instances of service providers that publish a web service interface.
An important challenge is that the workflows must meet some quality-of-service (QoS) metric, such as end-to-end completion time of all its business processes, and that meeting or failing this goal results in the assignment of a quantitative business value metric for the workflow; intuitively, it is desired that all workflows meet their respective QoS goals. We further leverage the notion that QoS service agreements are generally agreed upon between the web service providers and the scheduling agent such that the providers advertise some level of guaranteed QoS to the scheduler based upon runtime conditions such as turnaround time and maximum available concurrency. The resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type. The scheduling must be done such that the aggregate business value across all the workflows is maximised. In Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers. This approach converges towards an assignment that maximises the overall business value for all the workflows. In Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional). Further, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective schedules. 2. RELATED WORK In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but their work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up to one thousand as we show later) using different business metrics and a search heuristic.

Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers. Each business process accesses a service type and is then mapped to a service provider for that type.

[10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery. However, the provisioning techniques do not consider the challenges faced when there are alternative query execution plans and replicated data sources. [8] presents a feedback-based scheduling mechanism for multi-tiered systems with back-end databases, but unlike our work, it assumes a tighter coupling between the various components of the system. Our work also builds upon prior scheduling research. The classic job-shop scheduling problem, shown to be NP-complete [4] [3], is similar to ours in that tasks within a job must be scheduled onto machinery (cf. in our scenario, business processes within a workflow must be scheduled onto web service providers).
The salient differences are that the machines can process only one job at a time (we assume servers can multi-task but with degraded performance and a maximum concurrency level), tasks within a job cannot simultaneously run on different machines (we assume business processes can be assigned to any available server), and the principal metric of performance is the makespan, which is the time for the last task among all the jobs to complete (and as we show later, optimising on the makespan is insufficient for scheduling the business processes, necessitating different metrics). 3. DESIGN In this section we describe our model and discuss how we can find scheduling assignments using a genetic search algorithm. 3.1 Model We base our model on the simplified scenario shown in Figure 1. Specifically, we assume that users or automated systems request the execution of a workflow. The workflows comprise business processes, each of which makes one web service invocation to a service type. Further, business processes have an ordering in the workflow. The arrangement and execution of the business processes and the data flow between them are all managed by a composition or choreography tool (e.g. [1, 9]). Although composition languages can use sophisticated flow-control mechanisms such as conditional branches, for simplicity we assume the processes execute sequentially in a given order. This scenario can be naturally extended to more complex relationships that can be expressed in BPEL [7], which defines how business processes interact, messages are exchanged, activities are ordered, and exceptions are handled. Due to space constraints, we focus on the problem space presented here and will extend our model to more advanced deployment scenarios in the future. Each workflow has a QoS requirement to complete within a specified number of time units (e.g. on the order of seconds, as detailed in the Experiments section). Upon completion (or failure), the workflow is assigned a business value. We extended this approach further and considered different types of workflow completion in order to model differentiated QoS levels that can be applied by businesses (for example, to provide tiered customer service). We say that a workflow is successful if it completes within its QoS requirement, acceptable if it completes within a constant factor κ of its QoS bound (in our experiments we chose κ=3), or failing if it finishes beyond κ times its QoS bound. For each category, a business value score is assigned to the workflow, with the successful category assigned the highest positive score, followed by acceptable and then failing. The business value point distribution is non-uniform across workflows, further modelling cases where some workflows are of higher priority than others.
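To make the scoring concrete, the following is a minimal sketch of this three-way classification. All names here (Outcome, BusinessValue, classify, score) are our own illustration rather than code from the paper, and the per-category point values are drawn from the random distributions listed in Table 1.

// Sketch of the per-workflow outcome classification from Section 3.1.
enum class Outcome { Successful, Acceptable, Failing };

struct BusinessValue {      // per-workflow scores; randomised in the paper
    double successful;      // highest positive score (10 - 50 points)
    double acceptable;      // lower score (0 - 10 points)
    double failing;         // lowest score, possibly negative (-10 - 0 points)
};

// kappa is the constant factor on the QoS bound; the paper uses kappa = 3.
Outcome classify(double elapsed, double qosBound, double kappa = 3.0) {
    if (elapsed <= qosBound)         return Outcome::Successful;
    if (elapsed <= kappa * qosBound) return Outcome::Acceptable;
    return Outcome::Failing;
}

double score(Outcome o, const BusinessValue& bv) {
    switch (o) {
        case Outcome::Successful: return bv.successful;
        case Outcome::Acceptable: return bv.acceptable;
        default:                  return bv.failing;
    }
}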
Each service type is implemented by a number of different service providers. We assume that the providers make service level agreements (SLAs) to guarantee a level of performance defined by the completion time for completing a web service invocation. Although SLAs can be complex, in this paper we assume for simplicity that the guarantees can take the form of a linear performance degradation under load. This guarantee is defined by several parameters: α is the expected completion time (for example, on the order of seconds) if the assigned workload of web service requests is less than or equal to β, the maximum concurrency; if the workload is higher than β, the expected completion time for a workload of size ω is α + γ(ω − β), where γ is a fractional coefficient. In our experiments we vary α, β, and γ with different distributions.
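As a sketch, this guarantee can be computed directly from the three parameters. The Provider struct and function name below are our own illustration of the formula above, not an interface from the paper:

// Linear SLA degradation model from Section 3.1 (illustrative names).
struct Provider {
    double alpha;   // unloaded completion time (seconds)
    double beta;    // maximum concurrency before degradation begins
    double gamma;   // fractional degradation coefficient
};

// Expected completion time for one request when the provider currently
// has `load` assigned requests: alpha if load <= beta, otherwise
// alpha + gamma * (load - beta).
double completionTime(const Provider& p, double load) {
    return (load <= p.beta) ? p.alpha
                            : p.alpha + p.gamma * (load - p.beta);
}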
Ideally, all workflows would be able to finish within their QoS limits and thus maximise the aggregate business value across all workflows. However, because we model service providers with degrading performance under load, not all workflows will achieve their QoS limit: it may easily be the case that business processes are assigned to providers who are overloaded and cannot complete within the respective workflow's QoS limit. The key research problem, then, is to assign the business processes to the web service providers with the goal of optimising on the aggregate business value of all workflows. Given that the scope of the optimisation is the entire set of workflows, it may be that the best scheduling assignments result in some workflows having to fail in order for more workflows to succeed. This intuitive observation suggests that traditional scheduling approaches such as round-robin or proportional assignments will not fare well, which is what we observe and discuss in Section 4. On the other hand, an exhaustive search of all the possible assignments will find the best schedule, but the computational complexity is prohibitively high. Suppose there are W workflows with an average of B business processes per workflow. Further, in the worst case each business process requests one service type, for which there are P providers. There are thus W · P^B combinations to explore to find the optimal assignments of business processes to providers. Even for small configurations (e.g. W=10, B=5, P=10), the computational time for exhaustive search is significant, and in our work we look to scale these parameters. In the next subsection, we discuss how a genetic search algorithm can be used to converge toward the optimum scheduling assignments. 3.2 Genetic algorithm Given an exponential search space of business process assignments to web service providers, the problem is to find the optimal assignment that produces the overall highest aggregate business value across all workflows. To explore the solution space, we use a genetic algorithm (GA) search heuristic that simulates Darwinian natural selection by having members of a population compete to survive in order to pass their genetic chromosomes onto the next generation; after successive generations, there is a tendency for the chromosomes to converge toward the best combination [5] [6]. Although other search heuristics exist that can solve optimization problems (e.g. simulated annealing or steepest-ascent hill-climbing), the business process scheduling problem fits well with a GA because potential solutions can be represented in a matrix form and allows us to use prior research in effective GA chromosome recombination to form new members of the population (e.g. [2]).

             service type
             0  1  2  3  4
  workflow 0 1  2  0  2  1
  workflow 1 0  1  0  1  0
  workflow 2 1  2  0  0  1

Figure 2: An example chromosome representing a scheduling assignment of (workflow, service type) → service provider. Each row represents a workflow, and each column represents a service type. For example, here there are 3 workflows (0 to 2) and 5 service types (0 to 4). In workflow 0, any request for service type 3 goes to provider 2. Note that the service provider identifier is within a range limited to its service type (i.e. its column), so the 2 listed for service type 3 is a different server from server 2 in other columns.

Chromosome representation of a solution. In Figure 2 we show an example chromosome that encodes one scheduling assignment. The representation is a 2-dimensional matrix that maps {workflow, service type} to a service provider. For a business process in workflow i and utilising service type j, the (i, j)th entry in the table is the identifier for the service provider to which the business process is assigned. Note that the service provider identifier is within a range limited to its service type. GA execution. A GA proceeds as follows. Initially a random set of chromosomes is created for the population. The chromosomes are evaluated (hashed) to some metric, and the best ones are chosen to be parents. In our problem, the evaluation produces the net business value across all workflows after executing all business processes once they are assigned to their respective service providers according to the mapping in the chromosome. The parents recombine to produce children, simulating sexual crossover, and occasionally a mutation may arise which produces new characteristics that were not available in either parent. The principal idea is that we would like the children to be different from the parents (in order to explore more of the solution space) yet not too different (in order to retain the portions of the chromosome that result in good scheduling assignments). Note that finding the global optimum is not guaranteed because the recombination and mutation are stochastic. GA recombination and mutation. As mentioned, the chromosomes are 2-dimensional matrices that represent scheduling assignments. To simulate sexual recombination of two chromosomes to produce a new child chromosome, we applied a one-point crossover scheme twice (once along each dimension). The crossover is best explained by analogy to Cartesian space as follows. A random point is chosen in the matrix to be coordinate (0, 0). Matrix elements from quadrants II and IV from the first parent and elements from quadrants I and III from the second parent are used to create the new child. This approach follows GA best practices by keeping contiguous chromosome segments together as they are transmitted from parent to child. The uni-chromosome mutation scheme randomly changes one of the service provider assignments to another provider within the available range. Other recombination and mutation schemes are an area of research in the GA community, and we look to explore new operators in future work. GA evaluation function. An important GA component is the evaluation function. Given a particular chromosome representing one scheduling mapping, the function deterministically calculates the net business value across all workloads. The business processes in each workload are assigned to service providers, and each provider's completion time is calculated based on the service agreement guarantee using the parameters mentioned in Section 3.1, namely the unloaded completion time α, the maximum concurrency β, and a coefficient γ that controls the linear performance degradation under heavy load. Note that the evaluation function can be easily replaced if desired; for example, other evaluation functions can model different service provider guarantees or parallel workflows.
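As an illustration, a minimal sketch of the two-step one-point crossover and the uni-chromosome mutation might look as follows. The quadrant-to-index mapping and all names here are our own reading of the description above, not code from the paper:

#include <cstdlib>
#include <vector>

// A chromosome maps (workflow, service type) -> provider id, as in Figure 2.
using Chromosome = std::vector<std::vector<int>>;

// One-point crossover applied once along each dimension: a random pivot
// cell becomes the origin; quadrants II and IV (where the row side and
// column side of the pivot agree) come from parent a, and quadrants I and
// III come from parent b. This is one plausible reading of the Cartesian
// analogy in the text.
Chromosome crossover(const Chromosome& a, const Chromosome& b) {
    int rows = static_cast<int>(a.size());
    int cols = static_cast<int>(a[0].size());
    int pr = std::rand() % rows;            // pivot row
    int pc = std::rand() % cols;            // pivot column
    Chromosome child = a;                   // quadrants II and IV from a
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            if ((i < pr) != (j < pc))       // quadrants I and III from b
                child[i][j] = b[i][j];
    return child;
}

// Uni-chromosome mutation: reassign one random cell to a random provider,
// staying within the provider range valid for that service type (column).
void mutate(Chromosome& c, const std::vector<int>& providersPerType) {
    int i = std::rand() % static_cast<int>(c.size());
    int j = std::rand() % static_cast<int>(c[0].size());
    c[i][j] = std::rand() % providersPerType[j];
}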
4. EXPERIMENTS AND RESULTS In this section we show the benefit of using our GA-based scheduler. Because we wanted to scale the scenarios up to a large number of workflows (up to 1000 in our experiments), we implemented a simulation program that allowed us to vary parameters and to measure the results with different metrics. The simulator was written in standard C++ and was run on a Linux (Fedora Core) desktop computer running at 2.8 GHz with 1GB of RAM. We compared our algorithm against alternative candidates, sketched in code below: • A well-known round-robin algorithm that assigns each business process in circular fashion to the service providers for a particular service type. This approach provides the simplest scheme for load-balancing. • A random-proportional algorithm that proportionally assigns business processes to the service providers; that is, for a given service type, the service providers are ranked by their guaranteed completion time, and business processes are assigned proportionally to the providers based on their completion time. (We also tried a proportionality scheme based on both the completion times and maximum concurrency but attained the same results, so only the former scheme's results are shown here.) • A strawman greedy algorithm that always assigns business processes to the service provider that has the fastest guaranteed completion time. This algorithm represents a naive approach based on greedy, local observations of each workflow without taking into consideration all workflows.
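The following is a minimal sketch of these three baselines under assumed interfaces; the paper does not give its exact proportionality rule, so weighting providers by the inverse of their advertised completion time is our assumption:

#include <cstdlib>
#include <vector>

// Advertised (unloaded) completion time for each provider of one service
// type; all interfaces here are illustrative assumptions.
using Guarantees = std::vector<double>;

// Round-robin: cycle a per-service-type cursor through the providers.
int roundRobin(const Guarantees& g, int& cursor) {
    int pick = cursor;
    cursor = (cursor + 1) % static_cast<int>(g.size());
    return pick;
}

// Greedy: always the provider advertising the fastest completion time.
int greedy(const Guarantees& g) {
    int best = 0;
    for (int i = 1; i < static_cast<int>(g.size()); ++i)
        if (g[i] < g[best]) best = i;
    return best;
}

// Random-proportional: choose a provider with probability proportional
// to 1 / (advertised completion time), so faster providers get more work.
int randomProportional(const Guarantees& g) {
    double total = 0.0;
    for (double t : g) total += 1.0 / t;
    double r = total * (static_cast<double>(std::rand()) / RAND_MAX);
    for (int i = 0; i < static_cast<int>(g.size()); ++i) {
        r -= 1.0 / g[i];
        if (r <= 0.0) return i;
    }
    return static_cast<int>(g.size()) - 1;   // guard against rounding
}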
In the experiments that follow, all results were averaged across 20 trials, and to help normalise the effects of randomisation used during the GA, each trial started by reading in pre-initialised data from disk. In Table 1 we list our experimental parameters. In Figure 3 we show the results of running our GA against the three candidate alternatives. The x-axis shows the number of workflows scaled up to 1000, and the y-axis shows the aggregate business value for all workflows. As can be seen, the GA consistently produces the highest business value even as the number of workflows grows; at 1000 workflows, the GA produces a 115% improvement over the next-best alternative. (Note that although we are optimising against the business value metric we defined earlier, genetic algorithms are able to converge towards the optimal value of any metric, as long as the evaluation function can consistently measure a chromosome's value with that metric.) As expected, the greedy algorithm performs very poorly because it does the worst job at balancing load: all business processes for a given service type are assigned to only one server (the one advertised to have the fastest completion time), and as more business processes arrive, the provider's performance degrades linearly. The round-robin scheme is initially outperformed by the random-proportional scheme up to around 120 workflows (as shown in the magnified graph of Figure 4), but as the number of workflows increases, the round-robin scheme consistently wins over random-proportional. The reason is that although the random-proportional scheme assigns business processes to providers proportionally according to the advertised completion times (which is a measure of the power of the service provider), even the best providers will eventually reach a real-world maximum concurrency for the large number of workflows that we are considering.

Figure 3: Net business value scores of different scheduling algorithms.
Figure 4: Magnification of the left-most region in Figure 3.

For a very large number of workflows, the round-robin scheme is able to better balance the load across all service providers. To better understand the behaviour resulting from the scheduling assignments, we show the workflow completion results in Figures 5, 6, and 7 for 100, 500, and 900 workflows, respectively. These figures show the percentage of workflows that are successful (can complete within their QoS limit), acceptable (can complete within κ=3 times their QoS limit), and failed (cannot complete within κ=3 times their QoS limit). The GA consistently produces the highest percentage of successful workflows (resulting in higher business values for the aggregate set of workflows). Further, the round-robin scheme produces better results than the random-proportional for a large number of workflows but does not perform as well as the GA. In Figure 8 we graph the makespan resulting from the same experiments above. Makespan is a traditional metric from the job scheduling community measuring elapsed time for the last job to complete. While useful, it does not capture the high-level business value metric that we are optimising against. Indeed, the makespan is oblivious to the fact that we provide multiple levels of completion (successful, acceptable, and failed) and assign business value scores accordingly. For completeness, we note that the GA provides the fastest makespan, but it is matched by the round robin algorithm. The GA produces better business values (as shown in Figure 3) because it is able to search the solution space to find better mappings that produce more successful workflows (as shown in Figures 5 to 7). We also looked at the effect of the scheduling algorithms on balancing the load. Figure 9 shows the percentage of service providers that were accessed while the workflows ran.
As expected, the greedy algorithm always hits one service provider; on the other hand, the round-robin algorithm is the fastest to spread the business processes.

Table 1: Experimental parameters
  Workflows: 5 to 1000
  Business processes per workflow: uniform random, 1 - 10
  Service types: 10
  Service providers per service type: uniform random, 1 - 10
  Workflow QoS goal: uniform random, 10 - 30 seconds
  Service provider completion time (α): uniform random, 1 - 12 seconds
  Service provider maximum concurrency (β): uniform random, 1 - 12
  Service provider degradation coefficient (γ): uniform random, 0.1 - 0.9
  Business value for successful workflows: uniform random, 10 - 50 points
  Business value for acceptable workflows: uniform random, 0 - 10 points
  Business value for failed workflows: uniform random, -10 - 0 points
  GA: number of parents: 20
  GA: number of children: 80
  GA: number of generations: 1000

Figure 5: Workflow behaviour for 100 workflows.
Figure 6: Workflow behaviour for 500 workflows.
Figure 7: Workflow behaviour for 900 workflows.
Figure 8: Maximum completion time for all workflows. This value is the makespan metric used in traditional scheduling research. Although useful, the makespan does not take into consideration the business value scoring in our problem domain.

Figure 10 is the percentage of accessed service providers (that is, the percentage of service providers represented in Figure 9) that had more assigned business processes than their advertised maximum concurrency. For example, in the greedy algorithm only one service provider is utilised, and this one provider quickly becomes saturated. On the other hand, the random-proportional algorithm uses many service providers, but because business processes are proportionally assigned with more assignments going to the better providers, there is a tendency for a smaller percentage of providers to become saturated. For completeness, we show the performance of the genetic algorithm itself in Figure 11. The algorithm scales linearly with an increasing number of workflows. We note that the round-robin, random-proportional, and greedy algorithms all finished within 1 second even for the largest workflow configuration. However, we feel that the benefit of finding much higher business value scores justifies the running time of the GA; further we would expect that the running time will improve with both software tuning as well as with a computer faster than our off-the-shelf PC. 5. CONCLUSION Business processes within workflows can be orchestrated to access web services. In this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers.
The resulting problem is that in order to support a very large number of workflows, the assignment of business processes to web service providers must be intelligent. We used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows.

Figure 9: The percentage of service providers utilised during workload executions. The Greedy algorithm always hits the one service provider, while the Round Robin algorithm spreads requests evenly across the providers.
Figure 10: The percentage of service providers that are saturated among those providers who were utilised (that is, the percentage of the service providers represented in Figure 9). A saturated service provider is one whose workload is greater than its advertised maximum concurrency.
Figure 11: Running time of the genetic algorithm.

Since the solution space of scheduler mappings is exponential, we used a genetic search algorithm to search the space and converge toward the best schedule. With a default configuration for all parameters and using our business value scoring, the GA produced up to 115% business value improvement over the next best algorithm. Finally, because a genetic algorithm will converge towards the optimal value using any metric (even other than the business value metric we used), we believe our approach has strong potential for continuing work. In future work, we look to acquire real-world traces of web service instances in order to get better estimates of service agreement guarantees, although we expect that such guarantees between the providers and their consumers are not generally available to the public. We will also look at other QoS metrics such as CPU and I/O usage. For example, we can analyse transfer costs with varying bandwidth, latency, data size, and data distribution. Further, we hope to improve our genetic algorithm and compare it to more scheduler alternatives. Finally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL. 6. REFERENCES [1] A. Ankolekar, et al. DAML-S: Semantic Markup for Web Services, In Proc. of the Int'l Semantic Web Working Symposium, 2001. [2] L. Davis. Job Shop Scheduling with Genetic Algorithms, In Proc. of the Int'l Conference on Genetic Algorithms, 1985. [3] H.-L. Fang, P. Ross, and D. Corne. A Promising Genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems, In Proc. of the 5th Int'l Conference on Genetic Algorithms, 1993. [4] M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, 1979. [5] J. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, 1992. [6] D. Goldberg.
Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989. [7] Business Processes in a Web Services World, www-128.ibm.com/developerworks/webservices/library/ws-bpelwp/. [8] G. Soundararajan, K. Manassiev, J. Chen, A. Goel, and C. Amza. Back-end Databases in Shared Dynamic Content Server Clusters, In Proc. of the IEEE Int'l Conference on Autonomic Computing, 2005. [9] B. Srivastava and J. Koehler. Web Service Composition: Current Solutions and Open Problems, ICAPS, 2003. [10] B. Urgaonkar, P. Shenoy, A. Chandra, and P. Goyal. Dynamic Provisioning of Multi-Tier Internet Applications, In Proc. of the IEEE Int'l Conference on Autonomic Computing, 2005. [11] L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and Q. Sheng. Quality Driven Web Services Composition, In Proc. of the WWW Conference, 2003.
Heuristics-Based Scheduling of Composite Web Service Workloads ABSTRACT Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches. 1. INTRODUCTION Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems. Although previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints. In this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner. The problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning. Consider the example shown in Figure 1 that illustrates our problem space. Workflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent requested service from customers or automated systems and that the workflow has already been composed with an existing choreography toolkit. These workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers. In this example, a workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth. Each of these requests target a particular service type (e.g. airline reservations, hotel reservations, car reservations, etc.), and for each service type, there are multiple instances of service providers that publish a web service interface. An important challenge is that the workflows must meet some quality-of-service (QoS) metric, such as end-to-end completion time of all its business processes, and that meeting or failing this goal results in the assignment of a quantitative business value metric for the workflow; intuitively, it is desired that all workflows meet their respective QoS goals. 
We further leverage the notion that QoS service agreements are generally agreed-upon between the web service providers and the scheduling agent such that the providers advertise some level of guaranteed QoS to the scheduler based upon runtime conditions such as turnaround time and maximum available concurrency. The resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type. The scheduling must be done such that the aggregate business value across all the workflows is maximised. In Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers. This approach converges towards an assignment that maximises the overall business value for all the workflows. In Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional). Further, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective schedules. 2. RELATED WORK In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but their work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers. Each business process accesses a service type and is then mapped to a service provider for that type. to one thousand as we show later) using different business metrics and a search heuristic. [10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery. However, the provisioning techniques do not consider the challenges faced when there are alternative query execution plans and replicated data sources. [8] presents a feedback-based scheduling mechanism for multi-tiered systems with back-end databases, but unlike our work, it assumes a tighter coupling between the various components of the system. Our work also builds upon prior scheduling research. The classic job-shop scheduling problem, shown to be NP-complete [4] [3], is similar to ours in that tasks within a job must be scheduled onto machinery (c.f. our scenario is that business processes within a workflow must be scheduled onto web service providers). The salient differences are that the machines can process only one job at a time (we assume servers can multi-task but with degraded performance and a maximum concurrency level), tasks within a job cannot simultaneously run on different machines (we assume business processes can be assigned to any available server), and the principal metric of performance is the makespan, which is the time for the last task among all the jobs to complete (and as we show later, optimising on the makespan is insufficient for scheduling the business processes, necessitating different metrics). 3. DESIGN In this section we describe our model and discuss how we can find scheduling assignments using a genetic search algorithm. 3.1 Model We base our model on the simplified scenario shown in Figure 1. Specifically, we assume that users or automated systems request the execution of a workflow. 
The workflows comprise business processes, each of which makes one web service invocation to a service type. Further, business processes have an ordering in the workflow. The arrangement and execution of the business processes and the data flow between them are all managed by a composition or choreography tool (e.g. [1, 9]). Although composition languages can use sophisticated flow-control mechanisms such as conditional branches, for simplicity we assume the processes execute sequentially in a given order. This scenario can be naturally extended to more complex relationships that can be expressed in BPEL [7], which defines how business processes interact, messages are exchanged, activities are ordered, and exceptions are handled. Due to space constraints, we focus on the problem space presented here and will extend our model to more advanced deployment scenarios in the future. Each workflow has a QoS requirement to complete within a specified number of time units (e.g. on the order of seconds, as detailed in the Experiments section). Upon completion (or failure), the workflow is assigned a business value. We extended this approach further and considered different types of workflow completion in order to model differentiated QoS levels that can be applied by businesses (for example, to provide tiered customer service). We say that a workflow is successful if it completes within its QoS requirement, acceptable if it completes within a constant factor κ of its QoS bound (in our experiments we chose κ = 3), or failing if it finishes beyond κ times its QoS bound. For each category, a business value score is assigned to the workflow, with the "successful" category assigned the highest positive score, followed by "acceptable" and then "failing." The business value point distribution is non-uniform across workflows, further modelling cases where some workflows are of higher priority than others. Each service type is implemented by a number of different service providers. We assume that the providers make service level agreements (SLAs) to guarantee a level of performance defined by the completion time for completing a web service invocation. Although SLAs can be complex, in this paper we assume for simplicity that the guarantees can take the form of a linear performance degradation under load. This guarantee is defined by several parameters: α is the expected completion time (for example, on the order of seconds) if the assigned workload of web service requests is less than or equal to β, the maximum concurrency, and if the workload is higher than β, the expected completion for a workload of size ω is α + γ (ω − β) where γ is a fractional coefficient. In our experiments we vary α, β, and γ with different distributions. Ideally, all workflows would be able to finish within their QoS limits and thus maximise the aggregate business value across all workflows. However, because we model service providers with degrading performance under load, not all workflows will achieve their QoS limit: it may easily be the case that business processes are assigned to providers who are overloaded and cannot complete within the respective workflow's QoS limit. The key research problem, then, is to assign the business processes to the web service providers with the goal of optimising on the aggregate business value of all workflows. 
Given that the scope of the optimisation is the entire set of workflows, it may be that the best scheduling assignments may result in some workflows having to fail in order for more workflows to succeed. This intuitive observation suggests that traditional scheduling approaches such as round-robin or proportional assignments will not fare well, which is what we observe and discuss in Section 4. On the other hand, an exhaustive search of all the possible assignments will find the best schedule, but the computational complexity is prohibitively high. Suppose there are W workflows with an average of B business processes per workflow. Further, in the worst case each business process requests one service type, for which there are P providers. There are thus W · PB combinations to explore to find the optimal assignments of business processes to providers. Even for small configurations (e.g. W = 10, B = 5, P = 10), the computational time for exhaustive search is significant, and in our work we look to scale these parameters. In the next subsection, discuss how a genetic search algorithm can be used to converge toward the optimum scheduling assignments. 3.2 Genetic algorithm Given an exponential search space of business process assignments to web service providers, the problem is to find the optimal assignment that produces the overall highest aggregate business value across all workflows. To explore the solution space, we use a genetic algorithm (GA) search heuristic that simulates Darwinian natural selection by having members of a population compete to survive in order to pass their genetic chromosomes onto the next generation; after successive generations, there is a tendency for the chromosomes to converge toward the best combination [5] [6]. Although other search heuristics exist that can solve optimization problems (e.g. simulated annealing or steepest-ascent hillclimbing), the business process scheduling problem fits well with a GA because potential solutions can be represented in a matrix form and allows us to use prior research in effective GA chromosome recombination to form new members of the population (e.g. [2]). Figure 2: An example chromosome representing a scheduling assignment of (workflow, service type)--+ service provider. Each row represents a workflow, and each column represents a service type. For example, here there are 3 workflows (0 to 2) and 5 service types (0 to 4). In workflow 0, any request for service type 3 goes to provider 2. Note that the service provider identifier is within a range limited to its service type (i.e. its column), so the "2" listed for service type 3 is a different server from server "2" in other columns. Chromosome representation of a solution. In Figure 2 we show an example chromosome that encodes one scheduling assignment. The representation is a 2-dimensional matrix that maps {workflow, service type} to a service provider. For a business process in workflow i and utilising service type j, the (i, j) th entry in the table is the identifier for the service provider to which the business process is assigned. Note that the service provider identifier is within a range limited to its service type. GA execution. A GA proceeds as follows. Initially a random set of chromosomes is created for the population. The chromosomes are evaluated (hashed) to some metric, and the best ones are chosen to be parents. 
In our problem, the evaluation produces the net business value across all workflows after executing all business processes once they are assigned to their respective service providers according to the mapping in the chromosome. The parents recombine to produce children, simulating sexual crossover, and occasionally a mutation may arise which produces new characteristics that were not available in either parent. The principal idea is that we would like the children to be different from the parents (in order to explore more of the solution space) yet not too different (in order to contain the portions of the chromosome that result in good scheduling assignments). Note that finding the global optimum is not guaranteed because the recombination and mutation are stochastic. GA recombination and mutation. As mentioned, the chromosomes are 2-dimensional matrices that represent scheduling assignments. To simulate sexual recombination of two chromosomes to produce a new child chromosome, we applied a one-point crossover scheme twice (once along each dimension). The crossover is best explained by analogy to Cartesian space as follows. A random point is chosen in the matrix to be coordinate (0, 0). Matrix elements from quadrants II and IV from the first parent and elements from quadrants I and III from the second parent are used to create the new child. This approach follows GA best practices by keeping contiguous chromosome segments together as they are transmitted from parent to child. The uni-chromosome mutation scheme randomly changes one of the service provider assignments to another provider within the available range. Other recombination and mutation schemes are an area of research in the GA community, and we look to explore new operators in future work. GA evaluation function. An important GA component is the evaluation function. Given a particular chromosome representing one scheduling mapping, the function deterministically calculates the net business value across all workloads. The business processes in each workload are assigned to service providers, and each provider's completion time is calculated based on the service agreement guarantee using the parameters mentioned in Section 3.1, namely the unloaded completion time α, the maximum concur rency β, and a coefficient γ that controls the linear performance degradation under heavy load. Note that the evaluation function can be easily replaced if desired; for example, other evaluation functions can model different service provider guarantees or parallel workflows. 4. EXPERIMENTS AND RESULTS In this section we show the benefit of using our GA-based scheduler. Because we wanted to scale the scenarios up to a large number of workflows (up to 1000 in our experiments), we implemented a simulation program that allowed us to vary parameters and to measure the results with different metrics. The simulator was written in standard C++ and was run on a Linux (Fedora Core) desktop computer running at 2.8 GHz with 1GB of RAM. We compared our algorithm against alternative candidates: • A well-known round-robin algorithm that assigns each business process in circular fashion to the service providers for a particular service type. This approach provides the simplest scheme for load-balancing. 
• A random-proportional algorithm that proportionally assigns business processes to the service providers; that is, for a given service type, the service providers are ranked by their guaranteed completion time, and business processes are assigned proportionally to the providers based on their completion time. (We also tried a proportionality scheme based on both the completion times and maximum concurrency but attained the same results, so only the former scheme's results are shown here.) • A strawman greedy algorithm that always assigns business processes to the service provider that has the fastest guaranteed completion time. This algorithm represents a naive approach based on greedy, local observations of each workflow without taking into consideration all workflows. In the experiments that follow, all results were averaged across 20 trials, and to help normalise the effects of randomisation used during the GA, each trial started by reading in pre-initialised data from disk. In Table 1 we list our experimental parameters. In Figure 3 we show the results of running our GA against the three candidate alternatives. The x-axis shows the number for workflows scaled up to 1000, and the y-axis shows the aggregate business value for all workflows. As can be seen, the GA consistently produces the highest business value even as the number of workflows grows; at 1000 workflows, the GA produces a 115% improvement over the next-best alternative. (Note that although we are optimising against the business value metric we defined earlier, genetic algorithms are able to converge towards the optimal value of any metric, as long as the evaluation function can consistently measure a chromosome's value with that metric.) As expected, the greedy algorithm performs very poorly because it does the worst job at balancing load: all business processes for a given service type are assigned to only one server (the one advertised to have the fastest completion time), and as more business processes arrive, the provider's performance degrades linearly. The round-robin scheme is initially outperformed by the randomproportional scheme up to around 120 workflows (as shown in the magnified graph of Figure 4), but as the number of workflows increases, the round-robin scheme consistently wins over randomproportional. The reason is that although the random-proportional scheme assigns business processes to providers proportionally according to the advertised completion times (which is a measure of the "power" of the service provider), even the best providers will eventually reach a real-world maximum concurrency for the large Figure 3: Net business value scores of different scheduling algorithms. Figure 4: Magnification of the left-most region in Figure 3. number of workflows that we are considering. For a very large number of workflows, the round-robin scheme is able to better balance the load across all service providers. To better understand the behaviour resulting from the scheduling assignments, we show the workflow completion results in Figures 5, 6, and 7 for 100, 500, and 900 workflows, respectively. These figures show the percentage of workflows that are successful (can complete with their QoS limit), acceptable (can complete within r, = 3 times their QoS limit), and failed (cannot complete within r, = 3 times their QoS limit). The GA consistently produces the highest percentage of successful workflows (resulting in higher business values for the aggregate set of workflows). 
Further, the round-robin scheme produces better results than the random-proportional for a large number of workflows but does not perform as well as the GA. . In Figure 8 we graph the "makespan" resulting from the same experiments above. Makespan is a traditional metric from the job scheduling community measuring elapsed time for the last job to complete. While useful, it does not capture the high-level business value metric that we are optimising against. Indeed, the makespan is oblivious to the fact that we provide multiple levels of completion (successful, acceptable, and failed) and assign business value scores accordingly. For completeness, we note that the GA provides the fastest makespan, but it is matched by the round robin algorithm. The GA produces better business values (as shown in Figure 3) because it is able to search the solution space to find better mappings that produce more successful workflows (as shown in Figures 5 to 7). We also looked at the effect of the scheduling algorithms on balancing the load. Figure 9 shows the percentage of services providers that were accessed while the workflows ran. As expected, the greedy algorithm always hits one service provider; on the other hand, the round-robin algorithm is the fastest to spread the business Table 1: Experimental parameters Figure 8: Maximum completion time for all workflows. This value is the "makespan" metric used in traditional scheduling research. Although useful, the makespan does not take into consideration the business value scoring in our problem domain. processes. Figure 10 is the percentage of accessed service providers (that is, the percentage of service providers represented in Figure 9) that had more assigned business processes than their advertised maximum concurrency. For example, in the greedy algorithm only one service provider is utilised, and this one provider quickly becomes saturated. On the other hand, the random-proportional algorithm uses many service providers, but because business processes are proportionally assigned with more assignments going to the better providers, there is a tendency for a smaller percentage of providers to become saturated. For completeness, we show the performance of the genetic algorithm itself in Figure 11. The algorithm scales linearly with an increasing number of workflows. We note that the round-robin, random-proportional, and greedy algorithms all finished within 1 second even for the largest workflow configuration. However, we feel that the benefit of finding much higher business value scores justifies the running time of the GA; further we would expect that the running time will improve with both software tuning as well as with a computer faster than our off-the-shelf PC. 5. CONCLUSION Business processes within workflows can be orchestrated to access web services. In this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers. The resulting problem is that in order to support a very large number of workflows, the assignment of business process to web service provider must be intelligent. We used a business value metric to measure the be Figure 5: Workflow behaviour for 100 workflows. Figure 6: Workflow behaviour for 500 workflows. Figure 7: Workflow behaviour for 900 workflows. Figure 9: The percentage of service providers utilized during workload executions. 
We also looked at the effect of the scheduling algorithms on balancing the load. Figure 9 shows the percentage of service providers that were accessed while the workflows ran. As expected, the greedy algorithm always hits one service provider; on the other hand, the round-robin algorithm is the fastest to spread the business processes. Figure 10 shows the percentage of accessed service providers (that is, the percentage of service providers represented in Figure 9) that had more assigned business processes than their advertised maximum concurrency. For example, the greedy algorithm utilises only one service provider, and this one provider quickly becomes saturated. On the other hand, the random-proportional algorithm uses many service providers, but because business processes are assigned proportionally, with more assignments going to the better providers, there is a tendency for a smaller percentage of providers to become saturated.
Figure 9: The percentage of service providers utilised during workload executions. The Greedy algorithm always hits the one service provider, while the Round Robin algorithm spreads requests evenly across the providers.
Figure 10: The percentage of service providers that are saturated among those providers that were utilised (that is, the percentage of the service providers represented in Figure 9). A saturated service provider is one whose workload is greater than its advertised maximum concurrency.
For completeness, we show the performance of the genetic algorithm itself in Figure 11. The algorithm scales linearly with an increasing number of workflows. We note that the round-robin, random-proportional, and greedy algorithms all finished within 1 second even for the largest workflow configuration. However, we feel that the benefit of finding much higher business value scores justifies the running time of the GA; further, we expect that the running time will improve with both software tuning and a computer faster than our off-the-shelf PC.
Figure 11: Running time of the genetic algorithm.
5. CONCLUSION
Business processes within workflows can be orchestrated to access web services. In this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers. The resulting problem is that, in order to support a very large number of workflows, the assignment of business processes to web service providers must be intelligent. We used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows. Since the solution space of scheduler mappings is exponential, we used a genetic search algorithm to search the space and converge toward the best schedule. With a default configuration for all parameters and using our business value scoring, the GA produced up to a 115% business value improvement over the next-best algorithm. Finally, because a genetic algorithm will converge towards the optimal value of any metric (even one other than the business value metric we used), we believe our approach has strong potential for continuing work. In future work, we look to acquire real-world traces of web service instances in order to get better estimates of service agreement guarantees, although we expect that such guarantees between providers and their consumers are not generally available to the public. We will also look at other QoS metrics such as CPU and I/O usage. For example, we can analyse transfer costs with varying bandwidth, latency, data size, and data distribution. Further, we hope to improve our genetic algorithm and compare it to more scheduler alternatives. Finally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL.
Heuristics-Based Scheduling of Composite Web Service Workloads
ABSTRACT
Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.
1. INTRODUCTION
Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems. Although previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints. In this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner. The problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning. Consider the example shown in Figure 1, which illustrates our problem space. Workflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent requested service from customers or automated systems and that the workflow has already been composed with an existing choreography toolkit. These workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers. In this example, a workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth. Each of these requests targets a particular service type (e.g. airline reservations, hotel reservations, car reservations, etc.), and for each service type, there are multiple instances of service providers that publish a web service interface. An important challenge is that the workflows must meet some quality-of-service (QoS) metric, such as the end-to-end completion time of all their business processes, and that meeting or failing this goal results in the assignment of a quantitative business value metric for the workflow; intuitively, it is desired that all workflows meet their respective QoS goals.
We further leverage the notion that QoS service agreements are generally agreed upon between the web service providers and the scheduling agent, such that the providers advertise some level of guaranteed QoS to the scheduler based upon runtime conditions such as turnaround time and maximum available concurrency. The resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type. The scheduling must be done such that the aggregate business value across all the workflows is maximised. In Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers. This approach converges towards an assignment that maximises the overall business value for all the workflows. In Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional). Further, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective schedules.
2. RELATED WORK
In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but their work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up to one thousand, as we show later) using different business metrics and a search heuristic.
Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers. Each business process accesses a service type and is then mapped to a service provider for that type.
[10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery. However, the provisioning techniques do not consider the challenges faced when there are alternative query execution plans and replicated data sources. [8] presents a feedback-based scheduling mechanism for multi-tiered systems with back-end databases, but unlike our work, it assumes a tighter coupling between the various components of the system. Our work also builds upon prior scheduling research. The classic job-shop scheduling problem, shown to be NP-complete [4] [3], is similar to ours in that tasks within a job must be scheduled onto machinery (cf. our scenario, in which business processes within a workflow must be scheduled onto web service providers). The salient differences are that the machines can process only one job at a time (we assume servers can multi-task but with degraded performance and a maximum concurrency level), tasks within a job cannot simultaneously run on different machines (we assume business processes can be assigned to any available server), and the principal metric of performance is the makespan, which is the time for the last task among all the jobs to complete (and as we show later, optimising on the makespan is insufficient for scheduling the business processes, necessitating different metrics).
3. DESIGN
3.1 Model
3.2 Genetic algorithm
4. EXPERIMENTS AND RESULTS
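The design sections are elided at this level of detail, so the following is only a rough Python sketch of the kind of genetic search described above: a chromosome maps each business process to a provider, and selection, crossover, and mutation evolve the population against an evaluation function. The population sizes, operators, and stand-in fitness are our assumptions, not the paper's configuration.

import random

def evolve(num_processes, providers, fitness,
           pop_size=50, generations=200, mutation_rate=0.05):
    # A chromosome is a list: chromosome[i] is the provider assigned to
    # business process i. `fitness` scores a chromosome, e.g. the aggregate
    # business value of the schedule it induces.
    population = [[random.choice(providers) for _ in range(num_processes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents (elitist selection).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, num_processes)   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(num_processes):             # point mutation
                if random.random() < mutation_rate:
                    child[i] = random.choice(providers)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Usage with a stand-in fitness that simply prefers balanced load:
providers = ["A", "B", "C"]
balance = lambda chromo: -max(chromo.count(p) for p in providers)
best = evolve(num_processes=12, providers=providers, fitness=balance)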
Figure 5: Workflow behaviour for 100 workflows.
Figure 6: Workflow behaviour for 500 workflows.
Figure 7: Workflow behaviour for 900 workflows.
Figure 9: The percentage of service providers utilised during workload executions. The Greedy algorithm always hits the one service provider, while the Round Robin algorithm spreads requests evenly across the providers.
Figure 10: The percentage of service providers that are saturated among those providers that were utilised (that is, the percentage of the service providers represented in Figure 9). A saturated service provider is one whose workload is greater than its advertised maximum concurrency.
Figure 11: Running time of the genetic algorithm.
5. CONCLUSION
Business processes within workflows can be orchestrated to access web services. In this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers. The resulting problem is that, in order to support a very large number of workflows, the assignment of business processes to web service providers must be intelligent. We used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows. Since the solution space of scheduler mappings is exponential, we used a genetic search algorithm to search the space and converge toward the best schedule. With a default configuration for all parameters and using our business value scoring, the GA produced up to a 115% business value improvement over the next-best algorithm. Finally, because a genetic algorithm will converge towards the optimal value of any metric (even one other than the business value metric we used), we believe our approach has strong potential for continuing work. In future work, we look to acquire real-world traces of web service instances in order to get better estimates of service agreement guarantees, although we expect that such guarantees between providers and their consumers are not generally available to the public. We will also look at other QoS metrics such as CPU and I/O usage. For example, we can analyse transfer costs with varying bandwidth, latency, data size, and data distribution. Further, we hope to improve our genetic algorithm and compare it to more scheduler alternatives. Finally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL.
Heuristics-Based Scheduling of Composite Web Service Workloads ABSTRACT Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches. 1. INTRODUCTION Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems. Although previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints. In this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner. The problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning. Consider the example shown in Figure 1 that illustrates our problem space. Workflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent requested service from customers or automated systems and that the workflow has already been composed with an existing choreography toolkit. These workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers. In this example, a workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth. The resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type. The scheduling must be done such that the aggregate business value across all the workflows is maximised. In Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers. This approach converges towards an assignment that maximises the overall business value for all the workflows. In Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional). 
Further, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective schedules.
2. RELATED WORK
In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but their work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up to one thousand, as we show later) using different business metrics and a search heuristic.
Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers. Each business process accesses a service type and is then mapped to a service provider for that type.
[10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery. Our work also builds upon prior scheduling research. The classic job-shop scheduling problem, shown to be NP-complete [4] [3], is similar to ours in that tasks within a job must be scheduled onto machinery (cf. our scenario, in which business processes within a workflow must be scheduled onto web service providers).
Figure 5: Workflow behaviour for 100 workflows.
Figure 6: Workflow behaviour for 500 workflows.
Figure 7: Workflow behaviour for 900 workflows.
Figure 9: The percentage of service providers utilised during workload executions. The Greedy algorithm always hits the one service provider, while the Round Robin algorithm spreads requests evenly across the providers.
Figure 10: The percentage of service providers that are saturated among those providers that were utilised (that is, the percentage of the service providers represented in Figure 9). A saturated service provider is one whose workload is greater than its advertised maximum concurrency.
Figure 11: Running time of the genetic algorithm.
5. CONCLUSION
Business processes within workflows can be orchestrated to access web services. In this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers. The resulting problem is that, in order to support a very large number of workflows, the assignment of business processes to web service providers must be intelligent. We used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows. Since the solution space of scheduler mappings is exponential, we used a genetic search algorithm to search the space and converge toward the best schedule. With a default configuration for all parameters and using our business value scoring, the GA produced up to a 115% business value improvement over the next-best algorithm. Finally, because a genetic algorithm will converge towards the optimal value of any metric (even one other than the business value metric we used), we believe our approach has strong potential for continuing work. We will also look at other QoS metrics such as CPU and I/O usage. Further, we hope to improve our genetic algorithm and compare it to more scheduler alternatives. Finally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL.
C-67
A Holistic Approach to High-Performance Computing: Xgrid Experience
The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design. With a student-to-computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers. Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share valuable information we learned from our implementation of an Xgrid environment with our colleagues. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.
[ "xgrid", "design", "visual art", "design educ", "mac os x", "oper system", "high-end graphic", "nonlinear video edit", "anim", "multimedia", "web product", "digit video applic", "render", "xgrid environ", "grid comput", "larg-scale integr of technolog", "macintosh os x", "cluster", "highperform comput", "rendezv" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "U", "M", "U" ]
A Holistic Approach to High-Performance Computing: Xgrid Experience
David Przybyla, Ringling School of Art and Design, 2700 North Tamiami Trail, Sarasota, Florida 34234, 941-309-4720, dprzybyl@ringling.edu
Karissa Miller, Ringling School of Art and Design, 2700 North Tamiami Trail, Sarasota, Florida 34234, 941-359-7670, kmiller@ringling.edu
Mahmoud Pegah, Ringling School of Art and Design, 2700 North Tamiami Trail, Sarasota, Florida 34234, 941-359-7625, mpegah@ringling.edu
ABSTRACT
The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design. With a student-to-computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers. Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share valuable information we learned from our implementation of an Xgrid environment with our colleagues. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.
Categories and Subject Descriptors
C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications.
General Terms
Management, Documentation, Performance, Design, Economics, Reliability, Experimentation.
1. INTRODUCTION
Grid computing does not have a single, universally accepted definition. The technology behind the grid computing model is not new. Its roots lie in early distributed computing models dating back to the early 1980s, where scientists harnessed the computing power of idle workstations to let compute-intensive applications run on multiple workstations, dramatically shortening processing times. Although numerous distributed computing models were available for discipline-specific scientific applications, only recently have the tools become available to use general-purpose applications on a grid. Consequently, the grid computing model is gaining popularity and has become a showpiece of "utility computing". Since various computing models are used interchangeably with grid computing in the IT industry, we first sort out the similarities and differences between these computing models so that grid computing can be placed in perspective.
1.1 Clustering
A cluster is a group of machines in a fixed configuration united to operate and be managed as a single entity to increase robustness and performance. The cluster appears as a single high-speed system or a single highly available system. In this model, resources cannot enter and leave the group as necessary. There are at least two types of clusters: parallel clusters and high-availability clusters. Clustered machines are generally in spatial proximity, such as in the same server room, and dedicated solely to their task. In a high-availability cluster, each machine provides the same service. If one machine fails, another seamlessly takes over its workload. For example, each computer could be a web server for a web site. Should one web server "die," another provides the service, so that the web site rarely, if ever, goes down. A parallel cluster is a type of supercomputer. Problems are split into many parts, and individual cluster members are given part of the problem to solve. An example of a parallel cluster is the one composed of Apple Power Mac G5 computers at Virginia Tech University [1].
1.2 Distributed Computing
Distributed computing spatially expands network services so that the components providing the services are separated. The major objective of this computing model is to consolidate processing power over a network. A simple example is spreading services such as file and print serving, web serving, and data storage across multiple machines rather than having a single machine handle all the tasks. Distributed computing can also be more fine-grained, where even a single application is broken into parts and each part located on different machines: a word processor on one server, a spell checker on a second server, etc.
1.3 Utility Computing
Literally, utility computing resembles common utilities such as telephone or electric service. A service provider makes computing resources and infrastructure management available to a customer as needed, and charges for usage rather than a flat rate. The important thing to note is that resources are only used as needed, and not dedicated to a single customer.
1.4 Grid Computing
Grid computing contains aspects of clusters, distributed computing, and utility computing. In the most basic sense, a grid turns a group of heterogeneous systems into a centrally managed but flexible computing environment that can work on tasks too time-intensive for the individual systems. The grid members are not necessarily in proximity, but must merely be accessible over a network; the grid can access computers on a LAN, a WAN, or anywhere in the world via the Internet. In addition, the computers comprising the grid need not be dedicated to the grid; rather, they can function as normal workstations, and then advertise their availability to the grid when not in use. The last characteristic is the most fundamental to the grid described in this paper. A well-known example of such an ad hoc grid is the SETI@home project [2] of the University of California at Berkeley, which allows any person in the world with a computer and an Internet connection to donate unused processor time for analyzing radio telescope data.
1.5 Comparing the Grid and Cluster
A computer grid expands the capabilities of the cluster by loosening its spatial bounds, so that any computer accessible through the network gains the potential to augment the grid. A fundamental grid feature is that it scales well. The processing power of any machine added to the grid is immediately available for solving problems.
In addition, the machines on the grid can be general-purpose workstations, which keeps down the cost of expanding the grid.
2. ASSESSING THE NEED FOR GRID COMPUTING
Effective use of a grid requires a computation that can be divided into independent (i.e., parallel) tasks. The results of each task cannot depend on the results of any other task, and so the members of the grid can solve the tasks in parallel. Once the tasks have been completed, the results can be assembled into the solution. Examples of parallelizable computations are the Mandelbrot set of fractals, the Monte Carlo calculations used in disciplines such as Solid State Physics, and the individual frames of a rendered animation. This paper is concerned with the last example.
2.1 Applications Appropriate for Grid Computing
The applications used in grid computing must either be specifically designed for grid use, or scriptable in such a way that they can receive data from the grid, process the data, and then return results. In other words, the best candidates for grid computing are applications that run the same or very similar computations on a large number of pieces of data without any dependencies on previously calculated results. Applications heavily dependent on data handling rather than processing power are generally more suitable to run in a traditional environment than on a grid platform. Of course, the applications must also run on the computing platform that hosts the grid. Our interest is in using the Alias Maya application [3] with Apple's Xgrid [4] on Mac OS X. Commercial applications usually have strict license requirements. This is an important concern if we install a commercial application such as Maya on all members of our grid. By its nature, the size of the grid may change as the number of idle computers changes. How many licenses will be required? Our resolution of this issue will be discussed in a later section.
2.2 Integration into the Existing Infrastructure
The grid requires a controller that recognizes when grid members are available, and parses out jobs to available members. The controller must be able to see members on the network. This does not require that members be on the same subnet as the controller, but if they are not, any intervening firewalls and routers must be configured to allow grid traffic.
3. XGRID
Xgrid is Apple's grid implementation. It was inspired by Zilla, a desktop clustering application developed by NeXT and acquired by Apple. In this report we describe the Xgrid Technology Preview 2, a free download that requires Mac OS X 10.2.8 or later and a minimum of 128 MB RAM [5]. Xgrid leverages Apple's traditional ease of use and configuration. If the grid members are on the same subnet, by default Xgrid automatically discovers available resources through Rendezvous [6]. Tasks are submitted to the grid through a GUI interface or by the command line. A System Preference Pane controls when each computer is available to the grid. It may be best to view Xgrid as a facilitator. The Xgrid architecture handles software and data distribution, job execution, and result aggregation. However, Xgrid does not perform the actual calculations.
3.1 Xgrid Components
Xgrid has three major components: the client, controller, and agent. Each component is included in the default installation, and any computer can easily be configured to assume any role. In fact, for testing purposes, a computer can simultaneously assume all roles in local mode. The more typical production use is called cluster mode.
The client submits jobs to the controller through the Xgrid GUI or command line. The client defines how the job will be broken into tasks for the grid. If any files or executables must be sent as part of a job, they must reside on the client or at a location accessible to the client. When a job is complete, the client can retrieve the results from the controller. A client can only connect to a single controller at a time.
The controller runs the GridServer process. It queues tasks received from clients, distributes those tasks to the agents, and handles failover if an agent cannot complete a task. In Xgrid Technology Preview 2, a controller can handle a maximum of 10,000 agent connections. Only one controller can exist per logical grid.
The agents run the GridAgent process. When the GridAgent process starts, it registers with a controller; an agent can only be connected to one controller at a time. Agents receive tasks from their controller, perform the specified computations, and then send the results back to the controller. An agent can be configured to always accept tasks, or to accept them only when the computer is not otherwise busy.
3.2 Security and Authentication
By default, Xgrid requires two passwords. First, a client needs a password to access a controller. Second, the controller needs a password to access an agent. Either password requirement can be disabled. Xgrid uses a two-way random mutual authentication protocol with MD5 hashes. At this time, data encryption is only used for passwords. As mentioned earlier, an agent registers with a controller when the GridAgent process starts. There is no native method for the controller to reject agents, and so it must accept any agent that registers. This means that any agent could submit a job that consumes excessive processor and disk space on the agents. Of course, since Mac OS X is a BSD-based operating system, the controller could employ Unix methods of restricting network connections from agents. The Xgrid daemons run as the user nobody, which means the daemons can read, write, or execute any file according to world permissions. Thus, Xgrid jobs can execute many commands and write to /tmp and /Volumes. In general, this is not a major security risk, but it does require a level of trust between all members of the grid.
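As an aside, the shape of a two-way random challenge-response with MD5 can be illustrated with a toy Python sketch; this shows the general idea only and is not Xgrid's actual wire protocol or message format.

import hashlib
import os

def respond(password: bytes, challenge: bytes) -> bytes:
    # Prove knowledge of the shared password without transmitting it:
    # hash the password together with the peer's random challenge.
    return hashlib.md5(password + challenge).hexdigest().encode()

password = b"agent-password"            # shared secret from the preferences

controller_challenge = os.urandom(16)   # controller -> agent
agent_challenge = os.urandom(16)        # agent -> controller

agent_proof = respond(password, controller_challenge)
controller_proof = respond(password, agent_challenge)

# Each side recomputes the digest it expects and compares.
assert agent_proof == respond(password, controller_challenge)
assert controller_proof == respond(password, agent_challenge)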
3.3 Using Xgrid
3.3.1 Installation
Basic Xgrid installation and configuration is described both in Apple documentation [5] and online at the University of Utah web site [8]. The installation is straightforward and offers no options for customization. This means that every computer on which Xgrid is installed has the potential to be a client, controller, or agent.
3.3.2 Agent and Controller Configuration
The agents and controllers can be configured through the Xgrid Preference Pane in the System Preferences or through XML files in /Library/Preferences. Here the GridServer and GridAgent processes are started, passwords are set, and the controller discovery method used by agents is selected. By default, agents use Rendezvous to find a controller, although the agents can also be configured to look for a specific host. The Xgrid Preference Pane also sets whether the agents will always accept jobs, or only accept jobs when idle. In Xgrid terms, idle either means that the Xgrid screen saver has activated, or that the mouse and keyboard have not been used for more than 15 minutes. Even if the agent is configured to always accept tasks, if the computer is being used these tasks will run in the background at a low priority. However, if an agent only accepts jobs when idle, any unfinished tasks being performed when the computer ceases being idle are immediately stopped and any intermediary results are lost. The controller then assigns the task to another available member of the grid. Advertising the controller via Rendezvous can be disabled by editing /Library/Preferences/com.apple.xgrid.controller.plist. This, however, will not prevent an agent from connecting to the controller by hostname.
3.3.3 Sending Jobs from an Xgrid Client
The client sends jobs to the controller either through the Xgrid GUI or the command line. The Xgrid GUI submits jobs via small applications called plug-ins. Sample plug-ins are provided by Apple, but they are only useful for simple testing or as examples of how to create a custom plug-in. If we are to employ Xgrid for useful work, we will require a custom plug-in. James Reynolds details the creation of custom plug-ins on the University of Utah Mac OS web site [8]. Xgrid stores plug-ins in /Library/Xgrid/Plug-ins or ~/Library/Xgrid/Plug-ins, depending on whether the plug-in was installed with Xgrid or created by a user. The core plug-in parameter is the command, which includes the executable the agents will run. Another important parameter is the working directory. This directory contains necessary files that are not installed on the agents or available to them over a network. The working directory will always be copied to each agent, so it is best to keep this directory small. If the files are installed on the agents or available over a network, the working directory parameter is not needed. The command line allows the options available with the GUI plug-in, but it can be slightly more cumbersome. However, the command line will probably be the method of choice for serious work. The command arguments must be included in a script unless they are very basic. This can be a shell, Perl, or Python script, as long as the agent can interpret it.
3.3.4 Running the Xgrid Job
When the Xgrid job is started, the command tells the controller how to break the job into tasks for the agents. Then the command is tarred and gzipped and sent to each agent; if there is a working directory, it is also tarred and gzipped and sent to the agents. The agents extract these files into /tmp and run the task. Recall that since the GridAgent process runs as the user nobody, everything associated with the command must be available to nobody. Executables called by the command should be installed on the agents unless they are very simple. If the executable depends on libraries or other files, it may not function properly if transferred, even if the dependent files are referenced in the working directory. When the task is complete, the results are available to the client. In principle, the results are sent to the client, but whether this actually happens depends on the command. If the results are not sent to the client, they will be in /tmp on each agent. When available, a better solution is to direct the results to a network volume accessible to the client.
3.4 Limitations and Idiosyncrasies
Since Xgrid is only in its second preview release, there are some rough edges and limitations. Apple acknowledges some limitations [7]. For example, the controller cannot determine whether an agent is trustworthy, and the controller always copies the command and working directory to the agent without checking to see whether these already exist on the agent. Other limitations are likely just a by-product of unfinished work.
Neither the client nor the controller can specify which agents will receive the tasks, which is particularly important if the agents contain a variety of processor types and speeds and the user wants to optimize the calculations. At this time, the best solution to this problem may be to divide the computers into multiple logical grids. There is also no standard way to monitor the progress of a running job on each agent. The Xgrid GUI and command line indicate which agents are working on tasks, but give no indication of progress. Finally, at this time only Mac OS X clients can submit jobs to the grid. The framework exists to allow third parties to write plug-ins for other Unix flavors, but Apple has not created them.
4. XGRID IMPLEMENTATION
Our goal is an Xgrid render farm for Alias Maya. The Ringling School has about 400 Apple Power Mac G4s and G5s in 13 computer labs. The computers range from 733 MHz single-processor G4s and 500 MHz and 1 GHz dual-processor G4s to 1.8 GHz dual-processor G5s. All of these computers are lightly used in the evening and on weekends and represent an enormous processing resource for our student rendering projects.
4.1 Software Installation
During our Xgrid testing, we loaded software on each computer multiple times, including the operating systems. We saved time by facilitating our installations with the remote administration daemon (radmind) software developed at the University of Michigan [9], [10]. Everything we installed for testing was first created as a radmind base load or overload. Thus, Mac OS X, Mac OS X Developer Tools, Xgrid, POV-Ray [11], and Alias Maya were stored on a radmind server and then installed on our test computers when needed.
4.2 Initial Testing
We used six 1.8 GHz dual-processor Apple Power Mac G5s for our Xgrid tests. Each computer ran Mac OS X 10.3.3 and contained 1 GB RAM. As shown in Figure 1, one computer served as both client and controller, while the other five acted as agents. Before attempting Maya rendering with Xgrid, we performed basic calculations to cement our understanding of Xgrid. Apple's Xgrid documentation is sparse, so finding helpful web sites facilitated our learning. We first ran the Mandelbrot set plug-in provided by Apple, which allowed us to test the basic functionality of our grid. Then we performed benchmark rendering with the open-source application POV-Ray, as described by Daniel Côté [12] and James Reynolds [8]. Our results showed that one dual-processor G5 rendering the benchmark POV-Ray image took 104 minutes. Breaking the image into three equal parts and using Xgrid to send the parts to three agents required 47 minutes. However, two agents finished their rendering in 30 minutes, while the third agent used 47 minutes; the entire render was only as fast as the slowest agent. These results gave us two important pieces of information. First, the much longer rendering time for one of the tasks indicated that we should be careful how we split jobs into tasks for the agents. All portions of the rendering will not take equal amounts of time, even if the pixel size is the same. Second, since POV-Ray cannot take advantage of both processors in a G5, neither can an Xgrid task running POV-Ray. Alias Maya does not have this limitation.
4.3 Rendering with Alias Maya 6
We first installed Alias Maya 6 for Mac OS X on the client/controller and each agent. Maya 6 requires licenses for use as a workstation application.
However, if Maya is just used for rendering from the command line or a script, no license is needed. We thus created a minimal installation of Maya as a radmind overload. The application was installed in a hidden directory inside /Applications. This was done so that normal users of the workstations would not find and attempt to run Maya, which would fail because these installations are not licensed for such use.
Figure 1. Xgrid test grid: one client/controller, five agents, and a network volume holding job data.
In addition, Maya requires the existence of a directory ending in the path /maya. The directory must be readable and writable by the Maya user. For a user running Maya on a Mac OS X workstation, the path would usually be ~/Documents/maya. Unless otherwise specified, this directory will be the default location for Maya data and output files. If the directory does not exist, Maya will try to create it, even if the user specifies that the data and output files exist in other locations. However, Xgrid runs as the user nobody, which does not have a home directory. Maya is unable to create the needed directory, and looks instead for /Alias/maya. This directory also does not exist, and the user nobody has insufficient rights to create it. Our solution was to manually create /Alias/maya and give the user nobody read and write permissions. We also created a network volume for storage of both the rendering data and the resulting rendered frames. This avoided sending the Maya files and associated textures to each agent as part of a working directory. Such a solution worked well for us because our computers are geographically close on a LAN; if greater distance had separated the agents from the client/controller, specifying a working directory might have been a better solution.
Finally, we created a custom GUI plug-in for Xgrid. The plug-in command calls a Perl script with three arguments. Two arguments specify the beginning and end frames of the render, and the third argument gives the number of frames in each job (which we call the cluster size). The script then calculates the total number of jobs and parses them out to the agents. For example, if we begin at frame 201 and end at frame 225, with 5 frames for each job, the plug-in will create 5 jobs and send them out to the agents. Once the jobs are sent to the agents, the script executes the /usr/sbin/Render command on each agent with the parameters appropriate for the particular job. The results are sent to the network volume. With the setup described, we were able to render with Alias Maya 6 on our test grid. Rendering speed was not important at this time; our first goal was to implement the grid, and in that we succeeded.
4.3.1 Pseudo Code for Perl Script in Custom Xgrid Plug-in
In this section we summarize, in simplified pseudocode, the Perl script used in our Xgrid plug-in; a runnable sketch of the same logic follows below.
agent_jobs {
• Read the beginning frame, end frame, and cluster size of the render.
• Check whether the render can be divided into an integer number of jobs based on the cluster size.
• If there is not an integer number of jobs, reduce the cluster size of the last job and set its last frame to the end frame of the render.
• Determine the start frame and end frame for each job.
• Execute the Render command.
}
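The same splitting logic can be sketched in a runnable form (shown here in Python rather than Perl; the Render flags and paths are illustrative placeholders based on the description above, not the authors' script).

import subprocess

def split_jobs(begin, end, cluster_size):
    # Split frames [begin, end] into jobs of cluster_size frames; the final
    # job absorbs any remainder so that no frame is dropped.
    jobs = []
    start = begin
    while start <= end:
        stop = min(start + cluster_size - 1, end)
        jobs.append((start, stop))
        start = stop + 1
    return jobs

def render(job, scene, output_dir):
    # Run one job's frame range through the command-line renderer. The
    # exact flags vary by Maya version; these are for illustration only.
    start, stop = job
    subprocess.run(["/usr/sbin/Render", "-s", str(start), "-e", str(stop),
                    "-rd", output_dir, scene], check=True)

# The example from the text: frames 201-225 with a cluster size of 5
# yield five jobs: (201, 205), (206, 210), ..., (221, 225).
print(split_jobs(201, 225, 5))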
4.4 Lessons Learned
Rendering with Maya from the Xgrid GUI was not trivial. The lack of Xgrid documentation and the requirements of Maya combined into a confusing picture, where it was difficult to decide the true cause of the problems we encountered. Trial and error was required to determine the best way to set up our grid. The first hurdle was creating the directory /Alias/maya with read and write permissions for the user nobody. The second hurdle was learning that we got the best performance by storing the rendering data on a network volume. The last major hurdle was retrieving our results from the agents. Unlike the POV-Ray rendering tests, our initial Maya results were never returned to the client; instead, Maya stored the results in /tmp on each agent. Specifying in the plug-in where to send the results would not change this behavior. We decided this was likely a Maya issue rather than an Xgrid issue, and the solution was to send the results to the network volume via the Perl script.
5. FUTURE PLANS
Maya on Xgrid is not yet ready to be used by the students of the Ringling School. In order to get there, we must address at least the following concerns.
• Continue our rendering tests through the command line rather than the GUI plug-in. This will be essential for the following step.
• Develop an appropriate interface for users to send jobs to the Xgrid controller. This will probably be an extension to the web interface of our existing render farm, where the student specifies parameters that are placed in a script that issues the Render command.
• Perform timed Maya rendering tests with Xgrid. Part of this should compare the rendering times for Power Mac G4s and G5s.
6. CONCLUSION
Grid computing continues to advance. Recently, the IT industry has witnessed the emergence of numerous types of contemporary grid applications in addition to the traditional grid framework for compute-intensive applications. For instance, peer-to-peer applications such as Kazaa are based on storage grids that do not share processing power but instead use an elegant protocol to swap files between systems. Although on our campuses we discourage students from using peer-to-peer applications for music sharing, the same protocol can be utilized in applications such as decision support and data mining. The National Virtual Collaboratory grid project [13] will link earthquake researchers across the U.S. with computing resources, allowing them to share extremely large data sets and research equipment, and to work together as virtual teams over the Internet. There is an assortment of new grid players in the IT world expanding the grid computing model and advancing grid technology to the next level. SAP [14] is piloting a project to grid-enable SAP ERP applications, Dell [15] has partnered with Platform Computing to consolidate computing resources and provide grid-enabled systems for compute-intensive applications, Oracle has integrated support for grid computing in their 10g release [16], United Devices [17] offers a hosting service for grid-on-demand, and Sun Microsystems continues research and development of Sun's N1 Grid Engine [18], which combines grid and clustering platforms. Simply put, grid computing is up and coming. The potential benefits of grid computing in higher education are colossal, while the implementation costs are low. Today, it would be difficult to identify an application with as high a return on investment as grid computing in the information technology divisions of higher education institutions. It is a mistake to overlook a technology with such a high payback.
7. ACKNOWLEDGMENTS
The authors would like to thank Scott Hanselman of the IT team at the Ringling School of Art and Design for providing valuable input in the planning of our Xgrid testing. We would also like to thank the posters of the Xgrid Users Mailing List [19] for providing insight into many areas of Xgrid.
8. REFERENCES
[1] Apple Academic Research, http://www.apple.com/education/science/profiles/vatech/.
[2] SETI@home: Search for Extraterrestrial Intelligence at home, http://setiathome.ssl.berkeley.edu/.
[3] Alias, http://www.alias.com/.
[4] Apple Computer, Xgrid, http://www.apple.com/acg/xgrid/.
[5] Xgrid Guide, http://www.apple.com/acg/xgrid/, 2004.
[6] Apple Mac OS X Features, http://www.apple.com/macosx/features/rendezvous/.
[7] Xgrid Manual Page, 2004.
[8] James Reynolds, Xgrid Presentation, University of Utah, http://www.macos.utah.edu:16080/xgrid/, 2004.
[9] Research Systems Unix Group, Radmind, University of Michigan, http://rsug.itd.umich.edu/software/radmind.
[10] Using the Radmind Command Line Tools to Maintain Multiple Mac OS X Machines, http://rsug.itd.umich.edu/software/radmind/files/radmindtutorial-0.8.1.pdf.
[11] POV-Ray, http://www.povray.org/.
[12] Daniel Côté, Xgrid example: Parallel graphics rendering in POVray, http://unu.novajo.ca/simple/, 2004.
[13] NEESgrid, http://www.neesgrid.org/.
[14] SAP, http://www.sap.com/.
[15] Platform Computing, http://platform.com/.
[16] Oracle Grid, http://www.oracle.com/technologies/grid/.
[17] United Devices, Inc., http://ud.com/.
[18] N1 Grid Engine 6, http://www.sun.com/software/gridware/index.html/.
[19] Xgrid Users Mailing List, http://www.lists.apple.com/mailman/listinfo/xgridusers/.
A Holistic Approach to High-Performance Computing: Xgrid Experience
ABSTRACT
The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design. With a student-to-computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers. Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share valuable information we learned from our implementation of an Xgrid environment with our colleagues. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.
1. INTRODUCTION
Grid computing does not have a single, universally accepted definition. The technology behind the grid computing model is not new. Its roots lie in early distributed computing models dating back to the early 1980s, where scientists harnessed the computing power of idle workstations to let compute-intensive applications run on multiple workstations, dramatically shortening processing times. Although numerous distributed computing models were available for discipline-specific scientific applications, only recently have the tools become available to use general-purpose applications on a grid. Consequently, the grid computing model is gaining popularity and has become a showpiece of "utility computing". Since various computing models are used interchangeably with grid computing in the IT industry, we first sort out the similarities and differences between these computing models so that grid computing can be placed in perspective.
1.1 Clustering
A cluster is a group of machines in a fixed configuration united to operate and be managed as a single entity to increase robustness and performance. The cluster appears as a single high-speed system or a single highly available system. In this model, resources cannot enter and leave the group as necessary. There are at least two types of clusters: parallel clusters and high-availability clusters. Clustered machines are generally in spatial proximity, such as in the same server room, and dedicated solely to their task. In a high-availability cluster, each machine provides the same service. If one machine fails, another seamlessly takes over its workload. For example, each computer could be a web server for a web site.
Should one web server "die," another provides the service, so that the web site rarely, if ever, goes down. A parallel cluster is a type of supercomputer. Problems are split into many parts, and individual cluster members are given part of the problem to solve. An example of a parallel cluster is the one composed of Apple Power Mac G5 computers at Virginia Tech University [1].
1.2 Distributed Computing
Distributed computing spatially expands network services so that the components providing the services are separated. The major objective of this computing model is to consolidate processing power over a network. A simple example is spreading services such as file and print serving, web serving, and data storage across multiple machines rather than having a single machine handle all the tasks. Distributed computing can also be more fine-grained, where even a single application is broken into parts and each part located on different machines: a word processor on one server, a spell checker on a second server, etc.
1.3 Utility Computing
Literally, utility computing resembles common utilities such as telephone or electric service. A service provider makes computing resources and infrastructure management available to a customer as needed, and charges for usage rather than a flat rate. The important thing to note is that resources are only used as needed, and not dedicated to a single customer.
1.4 Grid Computing
Grid computing contains aspects of clusters, distributed computing, and utility computing. In the most basic sense, a grid turns a group of heterogeneous systems into a centrally managed but flexible computing environment that can work on tasks too time-intensive for the individual systems. The grid members are not necessarily in proximity, but must merely be accessible over a network; the grid can access computers on a LAN, a WAN, or anywhere in the world via the Internet. In addition, the computers comprising the grid need not be dedicated to the grid; rather, they can function as normal workstations, and then advertise their availability to the grid when not in use. The last characteristic is the most fundamental to the grid described in this paper. A well-known example of such an "ad hoc" grid is the SETI@home project [2] of the University of California at Berkeley, which allows any person in the world with a computer and an Internet connection to donate unused processor time for analyzing radio telescope data.
1.5 Comparing the Grid and Cluster
A computer grid expands the capabilities of the cluster by loosening its spatial bounds, so that any computer accessible through the network gains the potential to augment the grid. A fundamental grid feature is that it scales well. The processing power of any machine added to the grid is immediately available for solving problems. In addition, the machines on the grid can be general-purpose workstations, which keeps down the cost of expanding the grid.
2. ASSESSING THE NEED FOR GRID COMPUTING
Effective use of a grid requires a computation that can be divided into independent (i.e., parallel) tasks. The results of each task cannot depend on the results of any other task, and so the members of the grid can solve the tasks in parallel. Once the tasks have been completed, the results can be assembled into the solution. Examples of parallelizable computations are the Mandelbrot set of fractals, the Monte Carlo calculations used in disciplines such as Solid State Physics, and the individual frames of a rendered animation.
This paper is concerned with the last example. 2.1 Applications Appropriate for Grid Computing The applications used in grid computing must either be specifically designed for grid use or be scriptable in such a way that they can receive data from the grid, process the data, and then return results. In other words, the best candidates for grid computing are applications that run the same or very similar computations on a large number of pieces of data without any dependencies on previously calculated results. Applications heavily dependent on data handling rather than processing power are generally better suited to a traditional environment than to a grid platform. Of course, the applications must also run on the computing platform that hosts the grid. Our interest is in using the Alias Maya application [3] with Apple's Xgrid [4] on Mac OS X. Commercial applications usually have strict license requirements. This is an important concern if we install a commercial application such as Maya on all members of our grid. By its nature, the size of the grid may change as the number of idle computers changes. How many licenses will be required? Our resolution of this issue will be discussed in a later section. 2.2 Integration into the Existing Infrastructure The grid requires a controller that recognizes when grid members are available, and parses out jobs to available members. The controller must be able to see members on the network. This does not require that members be on the same subnet as the controller, but if they are not, any intervening firewalls and routers must be configured to allow grid traffic. 3. XGRID Xgrid is Apple's grid implementation. It was inspired by Zilla, a desktop clustering application developed by NeXT and acquired by Apple. In this report we describe the Xgrid Technology Preview 2, a free download that requires Mac OS X 10.2.8 or later and a minimum of 128 MB of RAM [5]. Xgrid leverages Apple's traditional ease of use and configuration. If the grid members are on the same subnet, by default Xgrid automatically discovers available resources through Rendezvous [6]. Tasks are submitted to the grid through a GUI interface or by the command line. A System Preference pane controls when each computer is available to the grid. It may be best to view Xgrid as a facilitator. The Xgrid architecture handles software and data distribution, job execution, and result aggregation. However, Xgrid does not perform the actual calculations. 3.1 Xgrid Components Xgrid has three major components: the client, the controller, and the agent. Each component is included in the default installation, and any computer can easily be configured to assume any role. In fact, for testing purposes, a computer can simultaneously assume all roles in "local mode." The more typical production use is called "cluster mode." The client submits jobs to the controller through the Xgrid GUI or command line. The client defines how the job will be broken into tasks for the grid. If any files or executables must be sent as part of a job, they must reside on the client or at a location accessible to the client. When a job is complete, the client can retrieve the results from the controller. A client can only connect to a single controller at a time. The controller runs the GridServer process. It queues tasks received from clients, distributes those tasks to the agents, and handles failover if an agent cannot complete a task. In Xgrid Technology Preview 2, a controller can handle a maximum of 10,000 agent connections.
Only one controller can exist per logical grid. The agents run the GridAgent process. When the GridAgent process starts, it registers with a controller; an agent can only be connected to one controller at a time. Agents receive tasks from their controller, perform the specified computations, and then send the results back to the controller. An agent can be configured to always accept tasks, or to accept them only when the computer is not otherwise busy. 3.2 Security and Authentication By default, Xgrid requires two passwords. First, a client needs a password to access a controller. Second, the controller needs a password to access an agent. Either password requirement can be disabled. Xgrid uses a two-way random mutual authentication protocol with MD5 hashes. At this time, data encryption is only used for passwords. As mentioned earlier, an agent registers with a controller when the GridAgent process starts. There is no native method for the controller to reject agents, and so it must accept any agent that registers. This means that any client could submit a job that consumes excessive processor and disk space on the agents. Of course, since Mac OS X is a BSD-based operating system, the controller could employ Unix methods of restricting network connections from agents. The Xgrid daemons run as the user "nobody," which means the daemons can read, write, or execute any file according to world permissions. Thus, Xgrid jobs can execute many commands and write to /tmp and /Volumes. In general, this is not a major security risk, but it does require a level of trust between all members of the grid. 3.3 Using Xgrid 3.3.1 Installation Basic Xgrid installation and configuration is described both in Apple documentation [5] and online at the University of Utah web site [8]. The installation is straightforward and offers no options for customization. This means that every computer on which Xgrid is installed has the potential to be a client, controller, or agent. 3.3.2 Agent and Controller Configuration The agents and controllers can be configured through the Xgrid Preference pane in the System Preferences or through XML files in /Library/Preferences. Here the GridServer and GridAgent processes are started, passwords are set, and the controller discovery method used by agents is selected. By default, agents use Rendezvous to find a controller, although the agents can also be configured to look for a specific host. The Xgrid Preference pane also sets whether the agents will always accept jobs, or only accept jobs when idle. In Xgrid terms, idle means either that the Xgrid screen saver has activated, or that the mouse and keyboard have not been used for more than 15 minutes. Even if the agent is configured to always accept tasks, if the computer is being used these tasks will run in the background at a low priority. However, if an agent only accepts jobs when idle, any unfinished task being performed when the computer ceases being idle is immediately stopped and any intermediate results are lost. The controller then assigns the task to another available member of the grid. Advertising the controller via Rendezvous can be disabled by editing /Library/Preferences/com.apple.xgrid.controller.plist. This, however, will not prevent an agent from connecting to the controller by hostname. 3.3.3 Sending Jobs from an Xgrid Client The client sends jobs to the controller either through the Xgrid GUI or the command line. The Xgrid GUI submits jobs via small applications called plug-ins.
Sample plug-ins are provided by Apple, but they are only useful for simple testing or as examples of how to create a custom plug-in. If we are to employ Xgrid for useful work, we require a custom plug-in. James Reynolds details the creation of custom plug-ins on the University of Utah Mac OS web site [8]. Xgrid stores plug-ins in /Library/Xgrid/Plug-ins or ~/Library/Xgrid/Plug-ins, depending on whether the plug-in was installed with Xgrid or created by a user. The core plug-in parameter is the "command," which includes the executable the agents will run. Another important parameter is the "working directory." This directory contains necessary files that are not installed on the agents or available to them over a network. The working directory will always be copied to each agent, so it is best to keep this directory small. If the files are installed on the agents or available over a network, the working directory parameter is not needed. The command line allows the same options available with the GUI plug-in, but it can be slightly more cumbersome. However, the command line will probably be the method of choice for serious work. The command arguments must be included in a script unless they are very basic. This can be a shell, Perl, or Python script, as long as the agent can interpret it. 3.3.4 Running the Xgrid Job When the Xgrid job is started, the command tells the controller how to break the job into tasks for the agents. Then the command is tarred and gzipped and sent to each agent; if there is a working directory, it is also tarred and gzipped and sent to the agents. The agents extract these files into /tmp and run the task. Recall that since the GridAgent process runs as the user nobody, everything associated with the command must be available to nobody. Executables called by the command should be installed on the agents unless they are very simple. If the executable depends on libraries or other files, it may not function properly if transferred, even if the dependent files are referenced in the working directory. When the task is complete, the results are available to the client. In principle, the results are sent to the client, but whether this actually happens depends on the command. If the results are not sent to the client, they will be in /tmp on each agent. When available, a better solution is to direct the results to a network volume accessible to the client. 3.4 Limitations and Idiosyncrasies Since Xgrid is only in its second preview release, there are some rough edges and limitations. Apple acknowledges some limitations [7]. For example, the controller cannot determine whether an agent is trustworthy, and the controller always copies the command and working directory to the agent without checking to see whether these already exist on the agent. Other limitations are likely just a by-product of an unfinished work. Neither the client nor the controller can specify which agents will receive the tasks, which is particularly important if the agents contain a variety of processor types and speeds and the user wants to optimize the calculations. At this time, the best solution to this problem may be to divide the computers into multiple logical grids. There is also no standard way to monitor the progress of a running job on each agent. The Xgrid GUI and command line indicate which agents are working on tasks, but give no indication of progress. Finally, at this time only Mac OS X clients can submit jobs to the grid.
The framework exists to allow third parties to write plug-ins for other Unix flavors, but Apple has not created them. 4. XGRID IMPLEMENTATION Our goal is an Xgrid render farm for Alias Maya. The Ringling School has about 400 Apple Power Mac G4's and G5's in 13 computer labs. The computers range from 733 MHz single-processor G4's and 500 MHz and 1 GHz dual-processor G4's to 1.8 GHz dual-processor G5's. All of these computers are lightly used in the evening and on weekends and represent an enormous processing resource for our student rendering projects. 4.1 Software Installation During our Xgrid testing, we loaded software on each computer multiple times, including the operating systems. We saved time by facilitating our installations with the remote administration daemon (radmind) software developed at the University of Michigan [9], [10]. Everything we installed for testing was first created as a radmind base load or overload. Thus, Mac OS X, Mac OS X Developer Tools, Xgrid, POV-Ray [11], and Alias Maya were stored on a radmind server and then installed on our test computers when needed. 4.2 Initial Testing We used six 1.8 GHz dual-processor Apple Power Mac G5's for our Xgrid tests. Each computer ran Mac OS X 10.3.3 and contained 1 GB RAM. As shown in Figure 1, one computer served as both client and controller, while the other five acted as agents. Before attempting Maya rendering with Xgrid, we performed basic calculations to cement our understanding of Xgrid. Apple's Xgrid documentation is sparse, so finding helpful web sites facilitated our learning. [Figure 1. Xgrid test grid.] We first ran the Mandelbrot set plug-in provided by Apple, which allowed us to test the basic functionality of our grid. Then we performed benchmark rendering with the open source application POV-Ray, as described by Daniel Côté [12] and James Reynolds [8]. Our results showed that one dual-processor G5 rendering the benchmark POV-Ray image took 104 minutes. Breaking the image into three equal parts and using Xgrid to send the parts to three agents required 47 minutes. However, two agents finished their rendering in 30 minutes, while the third agent used 47 minutes; the entire render was only as fast as the slowest agent. These results gave us two important pieces of information. First, the much longer rendering time for one of the tasks indicated that we should be careful how we split jobs into tasks for the agents. All portions of the rendering will not take equal amounts of time, even if the pixel size is the same. Second, since POV-Ray cannot take advantage of both processors in a G5, neither can an Xgrid task running POV-Ray. Alias Maya does not have this limitation. 4.3 Rendering with Alias Maya 6 We first installed Alias Maya 6 for Mac OS X on the client/controller and each agent. Maya 6 requires licenses for use as a workstation application. However, if it is just used for rendering from the command line or a script, no license is needed. We thus created a minimal installation of Maya as a radmind overload. The application was installed in a "hidden" directory inside /Applications. This was done so that normal users of the workstations would not find and attempt to run Maya, which would fail because these installations are not licensed for such use. In addition, Maya requires the existence of a directory ending in the path /maya. The directory must be readable and writable by the Maya user. For a user running Maya on a Mac OS X workstation, the path would usually be ~/Documents/maya.
Unless otherwise specified, this directory will be the default location for Maya data and output files. If the directory does not exist, Maya will try to create it, even if the user specifies that the data and output files exist in other locations. However, Xgrid runs as the user nobody, which does not have a home directory. Maya is unable to create the needed directory, and looks instead for /Alias/maya. This directory also does not exist, and the user nobody has insufficient rights to create it. Our solution was to manually create /Alias/maya and give the user nobody read and write permissions. We also created a network volume for storage of both the rendering data and the resulting rendered frames. This avoided sending the Maya files and associated textures to each agent as part of a working directory. Such a solution worked well for us because our computers are geographically close on a LAN; if greater distance had separated the agents from the client/controller, specifying a working directory might have been a better solution. Finally, we created a custom GUI plug-in for Xgrid. The plug-in command calls a Perl script with three arguments. Two arguments specify the beginning and end frames of the render, and the third argument gives the number of frames in each job (which we call the "cluster size"). The script then calculates the total number of jobs and parses them out to the agents. For example, if we begin at frame 201 and end at frame 225, with 5 frames for each job, the plug-in will create 5 jobs and send them out to the agents. Once the jobs are sent to the agents, the script executes the /usr/sbin/Render command on each agent with the parameters appropriate for the particular job. The results are sent to the network volume. With the setup described, we were able to render with Alias Maya 6 on our test grid. Rendering speed was not important at this time; our first goal was to implement the grid, and in that we succeeded. 4.3.1 Pseudo Code for Perl Script in Custom Xgrid Plug-in In this section we summarize in simplified pseudo code format the Perl script used in our Xgrid plug-in. agent_jobs {
• Read the beginning frame, end frame, and cluster size of the render.
• Check whether the render can be divided into an integer number of jobs based on the cluster size.
• If there is not an integer number of jobs, reduce the cluster size of the last job and set its last frame to the end frame of the render.
• Determine the start frame and end frame for each job.
• Execute the Render command.
} 4.4 Lessons Learned Rendering with Maya from the Xgrid GUI was not trivial. The lack of Xgrid documentation and the requirements of Maya combined into a confusing picture, where it was difficult to determine the true cause of the problems we encountered. Trial and error was required to determine the best way to set up our grid. The first hurdle was creating the directory /Alias/maya with read and write permissions for the user nobody. The second hurdle was learning that we got the best performance by storing the rendering data on a network volume. The last major hurdle was retrieving our results from the agents. Unlike the POV-Ray rendering tests, our initial Maya results were never returned to the client; instead, Maya stored the results in /tmp on each agent. Specifying in the plug-in where to send the results would not change this behavior. We decided this was likely a Maya issue rather than an Xgrid issue, and the solution was to send the results to the network volume via the Perl script.
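The splitting logic summarized in section 4.3.1 is short enough to show in full. Our production script is Perl; the following is an illustrative Python translation of the same logic. It assumes (our assumption for the sketch, not something taken from Apple's or Alias's documentation) that Render accepts -s and -e flags for the start and end frames, and the scene path is a made-up location on the shared network volume.

    import subprocess

    def agent_jobs(begin, end, cluster_size):
        # Split the render [begin, end] into jobs of cluster_size frames;
        # the last job simply absorbs any remainder (section 4.3.1).
        jobs = []
        start = begin
        while start <= end:
            stop = min(start + cluster_size - 1, end)
            jobs.append((start, stop))
            start = stop + 1
        return jobs

    SCENE = "/Volumes/render/scene.mb"  # hypothetical path on the network volume

    # The example from the text: frames 201-225 in clusters of 5 yields 5 jobs.
    for s, e in agent_jobs(201, 225, 5):
        # One Render invocation per job; on the grid, each invocation would
        # be executed by a different agent rather than run locally.
        subprocess.run(["/usr/sbin/Render", "-s", str(s), "-e", str(e), SCENE])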
5. FUTURE PLANS Maya on Xgrid is not yet ready to be used by the students of the Ringling School. In order to reach that point, we must address at least the following concerns. • Continue our rendering tests through the command line rather than the GUI plug-in. This will be essential for the following step. • Develop an appropriate interface for users to send jobs to the Xgrid controller. This will probably be an extension to the web interface of our existing render farm, where the student specifies parameters that are placed in a script that issues the Render command. • Perform timed Maya rendering tests with Xgrid. Part of this should compare the rendering times for Power Mac G4's and G5's. 6. CONCLUSION Grid computing continues to advance. Recently, the IT industry has witnessed the emergence of numerous types of contemporary grid applications in addition to the traditional grid framework for compute-intensive applications. For instance, peer-to-peer applications such as Kazaa are based on storage grids that do not share processing power but instead use an elegant protocol to swap files between systems. Although on our campuses we discourage students from utilizing peer-to-peer applications for music sharing, the same protocol can be utilized in applications such as decision support and data mining. The National Virtual Collaboratory grid project [13] will link earthquake researchers across the U.S. with computing resources, allowing them to share extremely large data sets and research equipment, and to work together as virtual teams over the Internet. There is an assortment of new grid players in the IT world expanding the grid computing model and advancing grid technology to the next level. SAP [14] is piloting a project to grid-enable SAP ERP applications, Dell [15] has partnered with Platform Computing to consolidate computing resources and provide grid-enabled systems for compute-intensive applications, Oracle has integrated support for grid computing in their 10g release [16], United Devices [17] offers a hosting service for grid-on-demand, and Sun Microsystems continues research and development of Sun's N1 Grid engine [18], which combines grid and clustering platforms. Simply put, grid computing is up and coming. The potential benefits of grid computing in higher education are colossal, while the implementation costs are low. Today, it would be difficult to identify an application with as high a return on investment as grid computing in the information technology divisions of higher education institutions. It is a mistake to overlook a technology with such a high payback.
Marginal Contribution Nets: A Compact Representation Scheme for Coalitional Games ∗ Samuel Ieong † Computer Science Department Stanford University Stanford, CA 94305 sieong@stanford.edu Yoav Shoham Computer Science Department Stanford University Stanford, CA 94305 shoham@stanford.edu ABSTRACT We present a new approach to representing coalitional games based on rules that describe the marginal contributions of the agents. This representation scheme captures characteristics of the interactions among the agents in a natural and concise manner. We also develop efficient algorithms for two of the most important solution concepts, the Shapley value and the core, under this representation. The Shapley value can be computed in time linear in the size of the input. The emptiness of the core can be determined in time exponential only in the treewidth of a graphical interpretation of our representation. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; F.2 [Analysis of Algorithms and Problem Complexity] General Terms Algorithms, Economics 1. INTRODUCTION Agents can often benefit by coordinating their actions. Coalitional games capture these opportunities for coordination by explicitly modeling the ability of the agents to take joint actions as primitives. As an abstraction, coalitional games assign a payoff to each group of agents in the game. This payoff is intended to reflect the payoff the group of agents can secure for themselves regardless of the actions of the agents not in the group. These choices of primitives are in contrast to those of non-cooperative games, in which agents are modeled independently and their payoffs depend critically on the actions chosen by the other agents. 1.1 Coalitional Games and E-Commerce Coalitional games have appeared in the context of e-commerce. In [7], Kleinberg et al. use coalitional games to study recommendation systems. In their model, each individual knows about a certain set of items, is interested in learning about all items, and benefits from finding out about them. The payoff to a group of agents is the total number of distinct items known by its members. Given this coalitional game setting, Kleinberg et al. compute what the private information of the agents is worth to the system using the solution concept of the Shapley value (the definition can be found in section 2). These values can then be used to determine how much each agent should receive for participating in the system. As another example, consider the economics behind supply chain formation. The increased use of the Internet as a medium for conducting business has decreased the costs for companies to coordinate their actions, and therefore coalitional games are a good model for studying the supply chain problem. Suppose that each manufacturer purchases his raw materials from some set of suppliers, and that the suppliers offer higher discounts with larger purchases. The decrease in communication costs will let manufacturers find others interested in the same set of suppliers more cheaply, and facilitates the formation of coalitions to bargain with the suppliers. Depending on the set of suppliers and how much each coalition purchases from each supplier, we can assign payoffs to the coalitions according to the discounts they receive.
The resulting game can be analyzed using coalitional game theory, and we can answer questions such as the stability of coalitions and how to fairly divide the benefits among the participating manufacturers. A similar problem, combinatorial coalition formation, has previously been studied in [8]. 1.2 Evaluation Criteria for Coalitional Game Representation To capture the coalitional games described above and perform computations on them, we must first find a representation for these games. The naïve solution is to enumerate the payoffs to each set of agents, therefore requiring space exponential in the number of agents in the game. For the two applications described, the number of agents in the system can easily exceed a hundred; this naïve approach will not scale to such problems. Therefore, it is critical to find good representation schemes for coalitional games. We believe that the quality of a representation scheme should be evaluated by four criteria. Expressivity: the breadth of the class of coalitional games covered by the representation. Conciseness: the space requirement of the representation. Efficiency: the efficiency of the algorithms we can develop for the representation. Simplicity: the ease of use of the representation by users of the system. The ideal representation should be fully expressive, i.e., it should be able to represent any coalitional game, use as little space as possible, have efficient algorithms for computation, and be easy to use. The goal of this paper is to develop a representation scheme that has properties close to the ideal representation. Unfortunately, given that the number of degrees of freedom of coalitional games is O(2^n), not all games can be represented concisely using a single scheme due to information-theoretic constraints. For any given class of games, one may be able to develop a representation scheme that is tailored and more compact than a general scheme. For example, for the recommendation system game, a highly compact representation would be one that simply states which agents know of which products, and lets the algorithms that operate on the representation compute the values of coalitions appropriately. For some problems, however, there may not be efficient algorithms for customized representations. By having a general representation and efficient algorithms that go with it, the representation will be useful as a prototyping tool for studying new economic situations. 1.3 Previous Work The question of coalitional game representation has only been sparsely explored in the past [2, 3, 4]. In [4], Deng and Papadimitriou focused on the complexity of different solution concepts for coalitional games defined on graphs. While the representation is compact, it is not fully expressive. In [2], Conitzer and Sandholm looked into the problem of determining the emptiness of the core in superadditive games. They developed a compact representation scheme for such games, but again the representation is not fully expressive either. In [3], Conitzer and Sandholm developed a fully expressive representation scheme based on decomposition. Our work extends and generalizes the representation schemes in [3, 4] by decomposing the game into a set of rules that assign marginal contributions to groups of agents. We will give a more detailed review of these papers in section 2.2 after covering the technical background.
1.4 Summary of Our Contributions • We develop the marginal contribution networks representation, a fully expressive representation scheme whose size scales according to the complexity of the interactions among the agents. We believe that the representation is also simple and intuitive. • We develop an algorithm for computing the Shapley value of coalitional games under this representation that runs in time linear in the size of the input. • Under the graphical interpretation of the representation, we develop an algorithm for determining whether a payoff vector is in the core, and the emptiness of the core, in time exponential only in the treewidth of the graph. 2. PRELIMINARIES In this section, we will briefly review the basics of coalitional game theory and its two primary solution concepts, the Shapley value and the core. (The materials and terminology are based on the textbooks by Mas-Colell et al. [9] and Osborne and Rubinstein [11].) We will also review previous work on coalitional game representation in more detail. Throughout this paper, we will assume that the payoff to a group of agents can be freely distributed among its members. This assumption is often known as the transferable utility assumption. 2.1 Technical Background We can represent a coalitional game with transferable utility by the pair ⟨N, v⟩, where • N is the set of agents; and • v : 2^N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff. This representation is known as the characteristic form. As there are exponentially many subsets, it will take space exponential in the number of agents to describe a coalitional game. An outcome in a coalitional game specifies the utilities the agents receive. A solution concept assigns to each coalitional game a set of reasonable outcomes. Different solution concepts attempt to capture in some way outcomes that are stable and/or fair. Two of the best known solution concepts are the Shapley value and the core. The Shapley value is a normative solution concept. It prescribes a fair way to divide the gains from cooperation when the grand coalition (i.e., N) is formed. The division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents. Formally, let φ_i(v) denote the Shapley value of i under characteristic function v; as a notational convenience, we use a lower-case letter to represent the cardinality of the set denoted by the corresponding upper-case letter. Then

φ_i(v) = Σ_{S ⊆ N\{i}} [ s! (n − s − 1)! / n! ] (v(S ∪ {i}) − v(S))   (1)

The Shapley value is a solution concept that satisfies many nice properties, and has been studied extensively in the economic and game-theoretic literature. It has a very useful axiomatic characterization. Efficiency (EFF) A total of v(N) is distributed to the agents, i.e., Σ_{i∈N} φ_i(v) = v(N). Symmetry (SYM) If agents i and j are interchangeable, then φ_i(v) = φ_j(v). Dummy (DUM) If agent i is a dummy player, i.e., his marginal contribution to all groups S is the same, then φ_i(v) = v({i}). Additivity (ADD) For any two coalitional games v and w defined over the same set of agents N, φ_i(v + w) = φ_i(v) + φ_i(w) for all i ∈ N, where the game v + w is defined by (v + w)(S) = v(S) + w(S) for all S ⊆ N. We will refer to these axioms later in our proof of correctness of the algorithm for computing the Shapley value under our representation in section 4.
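To make definition (1) concrete, the following brute-force Python sketch (ours, not from the paper) computes Shapley values for a toy game by averaging marginal contributions over all permutations; it is exponential in the number of agents and is meant only as an illustration:

    from itertools import permutations

    def shapley(agents, v):
        # Average marginal contribution of each agent over all orderings.
        # v maps frozensets of agents to real-valued payoffs.
        phi = {i: 0.0 for i in agents}
        orders = list(permutations(agents))
        for order in orders:
            seen = set()
            for i in order:
                before = v(frozenset(seen))
                seen.add(i)
                phi[i] += v(frozenset(seen)) - before
        return {i: total / len(orders) for i, total in phi.items()}

    # Toy majority game on three players: a coalition wins iff it has
    # at least two members. By symmetry each agent receives 1/3.
    v = lambda S: 1.0 if len(S) >= 2 else 0.0
    print(shapley(["a", "b", "c"], v))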
The core is another major solution concept for coalitional games. It is a descriptive solution concept that focuses on outcomes that are stable. Stability under the core means that no set of players can jointly deviate to improve their payoffs. Formally, let x(S) denote Σ_{i∈S} x_i. An outcome x ∈ R^n is in the core if

x(S) ≥ v(S) for all S ⊆ N   (2)

The core was one of the first proposed solution concepts for coalitional games, and has been studied in detail. An important question for a given coalitional game is whether the core is empty; in other words, whether there is any outcome that is stable relative to group deviation. For a game to have a non-empty core, it must satisfy the property of balancedness, defined as follows. Let 1_S ∈ R^n denote the characteristic vector of S given by

(1_S)_i = 1 if i ∈ S, and 0 otherwise.

Let (λ_S)_{S⊆N} be a set of weights such that each λ_S is in the range between 0 and 1. This set of weights, (λ_S)_{S⊆N}, is a balanced collection if for all i ∈ N,

Σ_{S⊆N} λ_S (1_S)_i = 1

A game is balanced if for all balanced collections of weights,

Σ_{S⊆N} λ_S v(S) ≤ v(N)   (3)

By the Bondareva-Shapley theorem, the core of a coalitional game is non-empty if and only if the game is balanced. Therefore, we can use linear programming to determine whether the core of a game is empty:

maximize_{λ ∈ R^{2^n}}  Σ_{S⊆N} λ_S v(S)
subject to  Σ_{S⊆N} λ_S (1_S)_i = 1  ∀i ∈ N
            λ_S ≥ 0  ∀S ⊆ N   (4)

If the optimal value of (4) is greater than the value of the grand coalition, then the core is empty. Unfortunately, this program has a number of variables exponential in the number of players in the game, and hence an algorithm that operates directly on this program would be infeasible in practice. In section 5.4, we will describe an algorithm that answers the question of the emptiness of the core by working on the dual of this program instead.
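For small games, program (4) can be solved directly. The sketch below (our illustration; it uses scipy.optimize.linprog and enumerates one variable per non-empty coalition, so it only scales to toy instances) checks the balancedness condition for the three-player majority game, whose core is empty:

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    agents = [0, 1, 2]
    coalitions = [frozenset(c) for r in range(1, len(agents) + 1)
                  for c in combinations(agents, r)]
    v = lambda S: 1.0 if len(S) >= 2 else 0.0  # majority game

    # maximize sum_S lambda_S v(S) == minimize its negation
    c = [-v(S) for S in coalitions]
    # balance constraints: sum_S lambda_S (1_S)_i = 1 for every agent i
    A_eq = [[1.0 if i in S else 0.0 for S in coalitions] for i in agents]
    b_eq = np.ones(len(agents))
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

    # Core is empty iff the optimum exceeds v(N). Here lambda = 1/2 on each
    # two-player coalition gives 3/2 > 1, so this prints "empty".
    optimal = -res.fun
    print("core is", "empty" if optimal > v(frozenset(agents)) + 1e-9 else "non-empty")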
2.2 Previous Work Revisited Deng and Papadimitriou looked into the complexity of various solution concepts for coalitional games played on weighted graphs in [4]. In their representation, the agents are the nodes of the graph, and the value of a set of agents S is the sum of the weights of the edges spanned by them. Notice that this representation is concise, since the space required to specify such a game is O(n^2). However, this representation is not general; it cannot represent interactions among three or more agents. For example, it cannot represent the majority game, where a group of agents S has value 1 if and only if s > n/2. On the other hand, there is an efficient algorithm for computing the Shapley value of the game, and for determining whether the core is empty under the restriction of positive edge weights. However, in the unrestricted case, determining whether the core is non-empty is coNP-complete. Conitzer and Sandholm in [2] considered coalitional games that are superadditive. They described a concise representation scheme that only states the value of a coalition if the value is strictly superadditive. More precisely, the semantics of the representation is that for a group of agents S,

v(S) = max_{{T_1, T_2, ..., T_k} ∈ Π} Σ_i v(T_i)

where Π is the set of all possible partitions of S. The value v(S) is only explicitly specified for S if v(S) is greater than the value of every partition of S other than the trivial partition ({S}). While this representation can represent all games that are superadditive, there are coalitional games that it cannot represent. For example, it cannot represent any games with substitutability among the agents. An example of a game that cannot be represented is the unit game, where v(S) = 1 as long as S ≠ ∅. Under this representation, the authors showed that determining whether the core is non-empty is coNP-complete. In fact, even determining the value of a group of agents is NP-complete. In a more recent paper, Conitzer and Sandholm described a representation that decomposes a coalitional game into a number of subgames whose sum adds up to the original game [3]. The payoffs in these subgames are then represented by their respective characteristic functions. This scheme is fully general, as the characteristic form is a special case of this representation. For any given game, there may be multiple ways to decompose the game, and the decomposition may influence the computational complexity. For computing the Shapley value, the authors showed that the complexity is linear in the input description; in particular, if the largest subgame (as measured by number of agents) is of size n and the number of subgames is m, then their algorithm runs in O(m2^n) time, where the input size will also be O(m2^n). On the other hand, the problem of determining whether a certain outcome is in the core is coNP-complete. 3. MARGINAL CONTRIBUTION NETS In this section, we will describe the Marginal Contribution Networks representation scheme. We will show that the idea is flexible, and that we can easily extend it to increase its conciseness. We will also show how we can use this scheme to represent the recommendation game from the introduction. Finally, we will show that this scheme is fully expressive and generalizes the representation schemes in [3, 4]. 3.1 Rules and Marginal Contribution Networks The basic idea behind marginal contribution networks (MC-nets) is to represent coalitional games using sets of rules. The rules in MC-nets have the following syntactic form:

Pattern → value

A rule is said to apply to a group of agents S if S meets the requirement of the Pattern. In the basic scheme, these patterns are conjunctions of agents, and S meets the requirement of a given pattern if S is a superset of it. The value of a group of agents is defined to be the sum over the values of all rules that apply to the group. For example, if the set of rules is

{a ∧ b} → 5
{b} → 2

then v({a}) = 0, v({b}) = 2, and v({a, b}) = 5 + 2 = 7. MC-nets is a very flexible representation scheme, and can be extended in different ways. One simple way to extend it and increase its conciseness is to allow a wider class of patterns in the rules. A pattern that we will use throughout the remainder of the paper is one that applies only in the absence of certain agents. This is useful for expressing concepts such as substitutability or default values. Formally, we express such patterns by

{p_1 ∧ p_2 ∧ ... ∧ p_m ∧ ¬n_1 ∧ ¬n_2 ∧ ... ∧ ¬n_n}

which has the semantics that such a rule applies to a group S only if {p_i}_{i=1}^{m} ⊆ S and {n_j}_{j=1}^{n} ∩ S = ∅. We will call the {p_i}_{i=1}^{m} in the above pattern the positive literals, and {n_j}_{j=1}^{n} the negative literals. Note that if the pattern of a rule consists solely of negative literals, we consider the empty set of agents to satisfy such a pattern as well, and hence v(∅) may be non-zero in the presence of negative literals.
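These semantics are straightforward to operationalize. As a purely illustrative sketch (ours), the following Python snippet stores an MC-net as (positive literals, negative literals, value) triples and evaluates v(S) by summing the values of the rules that apply:

    def mcnet_value(rules, S):
        # A rule applies iff all its positive literals are in S and
        # none of its negative literals are; v(S) sums applicable rules.
        S = set(S)
        return sum(value for positives, negatives, value in rules
                   if set(positives) <= S and not set(negatives) & S)

    # The example from the text: {a ^ b} -> 5 and {b} -> 2.
    rules = [({"a", "b"}, set(), 5), ({"b"}, set(), 2)]
    print(mcnet_value(rules, {"a"}))       # 0
    print(mcnet_value(rules, {"b"}))       # 2
    print(mcnet_value(rules, {"a", "b"}))  # 7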
To demonstrate the increase in conciseness of the representation, consider the unit game described in section 2.2. To represent such a game without using negative literals, we would need 2^n rules for n players: a rule of value 1 for each individual agent, a rule of value −1 for each pair of agents to counter the double-counting, a rule of value 1 for each triplet of agents, and so on, similar to the inclusion-exclusion principle. On the other hand, using negative literals, we only need n rules: value 1 for the first agent, value 1 for the second agent in the absence of the first agent, value 1 for the third agent in the absence of the first two agents, and so on. The representational savings can be exponential in the number of agents. Given a game represented as an MC-net, we can interpret the set of rules that make up the game as a graph. We call this graph the agent graph. The nodes in the graph represent the agents in the game, and for each rule in the MC-net, we connect all the agents in the rule together and assign a value to the clique formed by the set of agents. Notice that to accommodate negative literals, we will need to annotate the clique appropriately. This alternative view of MC-nets will be useful in our algorithm for Core-Membership in section 5. We would like to end our discussion of the representation scheme by mentioning a trade-off between the expressiveness of patterns and the space required to represent them. To represent a coalitional game in characteristic form, one would need to specify all 2^n − 1 values. There is no overhead on top of that, since there is a natural ordering of the groups. For MC-nets, however, specification of the rules requires specifying both the patterns and the values. The patterns, if not represented compactly, may end up overwhelming the savings from having fewer values to specify. The space required for the patterns also leads to a trade-off between the expressiveness of the allowed patterns and the simplicity of representing them. However, we believe that for most naturally arising games, there should be sufficient structure in the problem that our representation achieves a net saving over the characteristic form. 3.2 Example: Recommendation Game As an example, we will use an MC-net to represent the recommendation game discussed in the introduction. For each product, as the benefit of knowing about the product counts only once for each group, we need to capture substitutability among the agents. This can be captured by a scaled unit game. Suppose the value of the knowledge about product i is v_i, and there are n_i agents, denoted by {x_i^j}, who know about the product. The game for product i can then be represented by the following rules:

{x_i^1} → v_i
{x_i^2 ∧ ¬x_i^1} → v_i
...
{x_i^{n_i} ∧ ¬x_i^{n_i−1} ∧ ... ∧ ¬x_i^1} → v_i

The entire game can then be built up from the sets of rules of each product. The space requirement is O(mn*), where m is the number of products in the system and n* is the maximum number of agents who know of the same product.
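The rules for one product can be generated mechanically. A small sketch of ours (reusing the rule triples and the mcnet_value evaluator from the previous snippet) builds the n_i rules of the scaled unit game for one product:

    def product_rules(knowers, value):
        # Scaled unit game: agent j contributes `value` only in the absence
        # of agents 1..j-1, so the product is counted exactly once.
        rules = []
        for j, agent in enumerate(knowers):
            rules.append(({agent}, set(knowers[:j]), value))
        return rules

    # A product worth 4 known by three agents. Any non-empty subset of the
    # knowers then has value exactly 4 under the evaluator above, e.g.
    # mcnet_value(product_rules(["x1", "x2", "x3"], 4), {"x2", "x3"}) == 4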
3.3 Representation Power

In this subsection, we discuss the expressiveness and conciseness of our representation scheme and compare it with previous work.

Proposition 1. Marginal contribution networks constitute a fully expressive representation scheme.

Proof. Consider an arbitrary coalitional game ⟨N, v⟩ in characteristic form representation. We can construct a set of rules to describe this game by starting from the singleton sets and building up the set of rules. For any singleton set {i}, we create a rule {i} → v({i}). For any pair of agents {i, j}, we create a rule {i ∧ j} → v({i, j}) − v({i}) − v({j}). We can continue to build up rules in a manner similar to the inclusion-exclusion principle. Since the game is arbitrary, MC-nets are fully expressive.

Using the construction outlined in the proof, we can show that our representation scheme can simulate the multi-issue representation scheme of [3] in almost the same amount of space.

Proposition 2. Marginal contribution networks use at most a linear factor (in the number of agents) more space than multi-issue representation for any game.

Proof. Given a game in multi-issue representation, we start by describing each of the subgames, which are represented in characteristic form in [3], with a set of rules. We then build up the grand game by including all the rules from the subgames. Note that our representation may require a space larger by a linear factor due to the need to describe the patterns for each rule. On the other hand, our approach may have fewer than an exponential number of rules for each subgame, depending on the structure of these subgames, and therefore may be more concise than multi-issue representation.

Conversely, there are games that require exponentially more space to represent under the multi-issue scheme than under our scheme.

Proposition 3. Marginal contribution networks are exponentially more concise than multi-issue representation for certain games.

Proof. Consider a unit game over all the agents N. As explained in section 3.1, this game can be represented in linear space using MC-nets with negative literals. However, as there is no decomposition of this game into smaller subgames, it will require space O(2^n) to represent this game under the multi-issue representation.

Under the agent graph interpretation of MC-nets, we can see that MC-nets are a generalization of the graphical representation in [4], namely from weighted graphs to weighted hypergraphs.

Proposition 4. Marginal contribution networks can represent any game in graphical form (under [4]) in the same amount of space.

Proof. Given a game in graphical form, G, for each edge (i, j) with weight wij in the graph, we create a rule {i ∧ j} → wij. Clearly this takes exactly the same space as the size of G, and by the additive semantics of the rules, it represents the same game as G.
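The constructive proof of Proposition 1 is essentially a Möbius transform of the characteristic function. A small Python sketch of the construction (names ours; for illustration only, since the resulting rule set can be exponentially large in the worst case):

    from itertools import chain, combinations

    def subsets(s):
        s = sorted(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    def to_mcnet(agents, v):
        # build rules bottom-up so that, for every S, the rule values of
        # all subsets of S sum to v(S) (inclusion-exclusion)
        rules = {}
        for S in subsets(agents):
            S = frozenset(S)
            if S:
                rules[S] = v.get(S, 0) - sum(w for T, w in rules.items() if T < S)
        return {S: w for S, w in rules.items() if w != 0}

    # majority game on 3 agents: v(S) = 1 iff |S| >= 2
    v = {frozenset(S): 1 for S in [(1, 2), (1, 3), (2, 3), (1, 2, 3)]}
    print(to_mcnet({1, 2, 3}, v))
    # rules {1,2}, {1,3}, {2,3} each get value 1, and {1,2,3} gets -2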
4. COMPUTING THE SHAPLEY VALUE

Given an MC-net, we have a simple algorithm to compute the Shapley value of the game. Considering each rule as a separate game, we start by computing the Shapley value of the agents for each rule. For each agent, we then sum up the Shapley values of that agent over all the rules. We first show that this final summing process correctly computes the Shapley value of the agents.

Proposition 5. The Shapley value of an agent in a marginal contribution network is equal to the sum of the Shapley values of that agent over each rule.

Proof. For any group S, under the MC-nets representation, v(S) is defined to be the sum over the values of all the rules that apply to S. Therefore, considering each rule as a game, by the (ADD) axiom discussed in section 2, the Shapley value of the game created from aggregating all the rules is equal to the sum of the Shapley values over the rules.

The remaining question is how to compute the Shapley values of the rules. We can separate the analysis into two cases: rules with only positive literals, and rules with mixed literals. For rules that have only positive literals, the Shapley value of the agents is v/m, where v is the value of the rule and m is the number of agents in the rule. This is a direct consequence of the (SYM) axiom of the Shapley value, as the agents in a rule are indistinguishable from each other.

For rules that have both positive and negative literals, we can consider the positive and the negative literals separately. For a given positive literal i, the rule will apply only if i occurs in a given permutation after the rest of the positive literals but before any of the negative literals. Formally, let φi denote the Shapley value of i, p the cardinality of the positive set, and n the cardinality of the negative set; then

    φi = ((p − 1)! n! / (p + n)!) · v = v / (p · C(p + n, n))

where C(·, ·) denotes the binomial coefficient. For a given negative literal j, j will be responsible for cancelling the application of the rule if all positive literals come before the negative literals in the ordering and j is the first among the negative literals. Therefore,

    φj = (p! (n − 1)! / (p + n)!) · (−v) = −v / (n · C(p + n, p))

By the (SYM) axiom, all positive literals have the value φi and all negative literals have the value φj. Note that the sum over all agents in a rule with mixed literals is 0. This is to be expected, as such rules contribute 0 to the grand coalition. The fact that these rules have no effect on the grand coalition may appear odd at first, but this is because the purpose of such rules is to define the values of coalitions smaller than the grand coalition.

In terms of computational complexity, given that the Shapley value of any agent in a given rule can be computed in time linear in the pattern of the rule, the total running time of the algorithm for computing the Shapley value of the game is linear in the size of the input.
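The whole of this section fits in a few lines of code. The following Python sketch (names ours, same rule encoding as in section 3.1) sums the closed-form per-rule values derived above:

    from collections import defaultdict
    from math import comb

    def shapley(rules):
        phi = defaultdict(float)
        for pos, neg, v in rules:
            p, n = len(pos), len(neg)
            for i in pos:   # phi_i = (p-1)! n! / (p+n)! * v
                phi[i] += v / (p * comb(p + n, n))
            for j in neg:   # phi_j = p! (n-1)! / (p+n)! * (-v)
                phi[j] -= v / (n * comb(p + n, p))
        return dict(phi)

    # unit game on 3 agents, written with negative literals as in section 3.1
    rules = [(frozenset({1}), frozenset(), 1),
             (frozenset({2}), frozenset({1}), 1),
             (frozenset({3}), frozenset({1, 2}), 1)]
    print(shapley(rules))  # every agent gets 1/3, as symmetry demands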
5. ANSWERING CORE-RELATED QUESTIONS

There are a few different but related computational problems associated with the solution concept of the core. We will focus on the following two:

Definition 1. (Core-Membership) Given a coalitional game and a payoff vector x, determine if x is in the core.

Definition 2. (Core-Non-Emptiness) Given a coalitional game, determine if the core is non-empty.

In the rest of the section, we first show that these two problems are coNP-complete and coNP-hard respectively, and discuss some complexity considerations. We then review the main ideas of tree decomposition, which is used extensively in our algorithm for Core-Membership. Next, we present the algorithm for Core-Membership and show that it runs in polynomial time for graphs of bounded treewidth. We end by extending this algorithm to answer the question of Core-Non-Emptiness in polynomial time for graphs of bounded treewidth.

5.1 Computational Complexity

The hardness of Core-Membership and Core-Non-Emptiness follows directly from the hardness results for games over weighted graphs in [4].

Proposition 6. Core-Membership for games represented as marginal contribution networks is coNP-complete.

Proof. Core-Membership in MC-nets is in coNP, since any set of agents S with v(S) > x(S) serves as a certificate that x does not belong to the core. As for its hardness, given any instance of Core-Membership for a game in graphical form of [4], we can encode the game in exactly the same space using an MC-net, by Proposition 4. Since Core-Membership for games in graphical form is coNP-complete, Core-Membership in MC-nets is coNP-hard.

Proposition 7. Core-Non-Emptiness for games represented as marginal contribution networks is coNP-hard.

Proof. The same hardness argument relating games in graphical form to MC-nets holds for the problem of Core-Non-Emptiness.

We do not currently know of a certificate showing that Core-Non-Emptiness is in coNP. Note that the obvious certificate of a balanced set of weights based on the Bondareva-Shapley theorem is exponential in size. In [4], Deng and Papadimitriou showed the coNP-completeness of Core-Non-Emptiness via a combinatorial characterization, namely that the core is non-empty if and only if there is no negative cut in the graph. In MC-nets, however, there need not be a negative hypercut in the graph for the core to be empty, as demonstrated by the following game (N = {1, 2, 3, 4}):

    v(S) = 1    if S = {1, 2, 3, 4}
         = 3/4  if S = {1, 2}, {1, 3}, {1, 4}, or {2, 3, 4}        (5)
         = 0    otherwise

Applying the Bondareva-Shapley theorem, if we let λ12 = λ13 = λ14 = 1/3 and λ234 = 2/3, this set of weights demonstrates that the game is not balanced, and hence the core is empty. On the other hand, this game can be represented with MC-nets as follows (weights on hyperedges):

    w({1, 2}) = w({1, 3}) = w({1, 4}) = 3/4
    w({1, 2, 3}) = w({1, 2, 4}) = w({1, 3, 4}) = −6/4
    w({2, 3, 4}) = 3/4
    w({1, 2, 3, 4}) = 10/4

No matter how the set of agents is partitioned, the sum over the weights of the hyperedges in the cut is always non-negative (a short script verifying both claims appears at the end of this subsection).

To overcome the computational hardness of these problems, we have developed algorithms that are based on tree decomposition techniques. For Core-Membership, our algorithm runs in time exponential only in the treewidth of the agent graph. Thus, for graphs of small treewidth, such as trees, we have a tractable solution for determining if a payoff vector is in the core. By using this procedure as a separation oracle, i.e., a procedure for returning the inequality violated by a candidate solution, when solving a linear program related to Core-Non-Emptiness with the ellipsoid method, we obtain a polynomial time algorithm for Core-Non-Emptiness for graphs of bounded treewidth.
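Here is the verification script just promised, a brute-force Python sketch (the hyperedge encoding and names are ours). It checks that the MC-net above reproduces game (5), that every hypercut is non-negative, and that the balanced weights nonetheless certify an empty core:

    from itertools import combinations

    N = frozenset({1, 2, 3, 4})
    w = {frozenset({1, 2}): 3/4, frozenset({1, 3}): 3/4, frozenset({1, 4}): 3/4,
         frozenset({1, 2, 3}): -6/4, frozenset({1, 2, 4}): -6/4,
         frozenset({1, 3, 4}): -6/4, frozenset({2, 3, 4}): 3/4, N: 10/4}

    def v(S):  # value of S under the MC-net: sum of hyperedges inside S
        return sum(val for e, val in w.items() if e <= S)

    assert abs(v(N) - 1) < 1e-9
    assert all(abs(v(frozenset(S)) - 3/4) < 1e-9
               for S in [{1, 2}, {1, 3}, {1, 4}, {2, 3, 4}])

    def cut(S):  # total weight of hyperedges crossing the partition (S, N \ S)
        return sum(val for e, val in w.items() if e & S and e - S)

    print(min(cut(frozenset(c)) for r in range(1, 4)
              for c in combinations(N, r)))      # 0.25: no negative hypercut

    lam = {frozenset({1, 2}): 1/3, frozenset({1, 3}): 1/3,
           frozenset({1, 4}): 1/3, frozenset({2, 3, 4}): 2/3}
    print(sum(l * v(S) for S, l in lam.items()))  # 1.25 > v(N) = 1: core empty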
5.2 Review of Tree Decomposition

As our algorithm for Core-Membership relies heavily on tree decomposition, we first briefly review the main ideas in tree decomposition and treewidth; this review is based largely on the survey by Bodlaender [1].

Definition 3. A tree decomposition of a graph G = (V, E) is a pair (X, T), where T = (I, F) is a tree and X = {Xi | i ∈ I} is a family of subsets of V, one for each node of T, such that

• ∪_{i∈I} Xi = V;
• for all edges (v, w) ∈ E, there exists an i ∈ I with v ∈ Xi and w ∈ Xi; and
• (Running Intersection Property) for all i, j, k ∈ I: if j is on the path from i to k in T, then Xi ∩ Xk ⊆ Xj.

The treewidth of a tree decomposition is defined as the maximum cardinality over all sets in X, less one. The treewidth of a graph is defined as the minimum treewidth over all tree decompositions of the graph. Given a tree decomposition, we can convert it into a nice tree decomposition of the same treewidth, and of size linear in that of T.

Definition 4. A tree decomposition T is nice if T is rooted and has four types of nodes:

Leaf nodes i are leaves of T with |Xi| = 1.
Introduce nodes i have one child j such that Xi = Xj ∪ {v} for some v ∈ V.
Forget nodes i have one child j such that Xi = Xj \ {v} for some v ∈ Xj.
Join nodes i have two children j and k with Xi = Xj = Xk.

An example of a (partial) nice tree decomposition, together with a classification of the different types of nodes, is in Figure 1.

[Figure 1: Example of a (partial) nice tree decomposition. Node i is a join node with Xi = {1, 3, 4}; beneath it are an introduce node j with Xj = {1, 4}, a join node k with Xk = {1, 4}, a forget node l with Xl = {1, 4}, an introduce node m with Xm = {1, 2, 4}, and a leaf node n with Xn = {4}.]

In the following section, we will refer to nodes in the tree decomposition as nodes, and to nodes in the agent graph as agents.

5.3 Algorithm for Core Membership

Our algorithm for Core-Membership takes as input a nice tree decomposition T of the agent graph and a payoff vector x. By definition, if x belongs to the core, then for all groups S ⊆ N, x(S) ≥ v(S). Therefore, the difference x(S) − v(S) measures how close the group S is to violating the core condition. We call this difference the excess of the group S.

Definition 5. The excess of a coalition S, e(S), is defined as x(S) − v(S).

A brute-force approach to determining whether a payoff vector belongs to the core would check that the excesses of all groups are non-negative. However, this approach ignores the structure in the agent graph that allows an algorithm to infer that certain groups have non-negative excesses from the excesses computed elsewhere in the graph. Tree decomposition is the key to taking advantage of such inferences in a structured way.

For now, let us focus on rules with positive literals. Suppose we have already checked that the excesses of all sets R ⊆ U are non-negative, and we would like to check whether the addition of an agent i to the set U creates a group with negative excess. A naïve solution is to compute the excesses of all sets that include i. The excess of the group (R ∪ {i}) for any group R can be computed as

    e(R ∪ {i}) = e(R) + xi − v(c)        (6)

where c is the cut between R and i, and v(c) is the sum of the weights of the edges in the cut. However, suppose that from the tree decomposition we know that i is only connected to a subset of U, say S, which we will call the entry set to U. Ideally, because i does not share any edges with members of Ū = (U \ S), we would hope that an algorithm can take advantage of this structure by checking only sets that are subsets of (S ∪ {i}). This computational saving may be possible, since (xi − v(c)) in the update equation (6) does not depend on Ū. However, we cannot simply ignore Ū, as members of Ū may still influence the excesses of groups that include agent i through the group S. Specifically, if there exists a group T ⊃ S such that e(T) < e(S), then even when e(S ∪ {i}) is non-negative, e(T ∪ {i}) may be negative. In other words, the excess available at S may have been drained away by T. This motivates the definition of the reserve of a group.

Definition 6. The reserve of a coalition S relative to a coalition U is the minimum excess over all coalitions between S and U, i.e., over all T with S ⊆ T ⊆ U. We denote this value by r(S, U). We refer to the group T that attains the minimum excess as arg r(S, U). We also call U the limiting set and S the base set of the reserve.

Our algorithm works by keeping track of the reserves of all non-empty subsets that can be formed from the agents of a node, at each of the nodes of the tree decomposition. Starting from the leaves of the tree and working towards the root, at each node i our algorithm computes the reserves of all groups S ⊆ Xi, limited by the set of agents in the subtree rooted at i, Ti, except those in (Xi \ S). The agents in (Xi \ S) are excluded to ensure that S is an entry set.
Specifically, S is the entry set to ((Ti \ Xi) ∪ S). To accommodate negative literals, we need to make two adjustments. First, the cut between an agent m and a set S at node i now refers to the cut among agent m, the set S, and the set ¬(Xi \ S), and its value must be computed accordingly. Second, when an agent m is introduced to a group at an introduce node, we must also consider the change in the reserves of groups that do not include m, due to possible cuts involving ¬m and the group.

As an example of the reserve values we keep track of at a tree node, consider node i of the tree in Figure 1. At node i, we keep track of the following:

    r({1}, {1, 2, ...})        r({3}, {2, 3, ...})        r({4}, {2, 4, ...})
    r({1, 3}, {1, 2, 3, ...})  r({1, 4}, {1, 2, 4, ...})  r({3, 4}, {2, 3, 4, ...})
    r({1, 3, 4}, {1, 2, 3, 4, ...})

where the dots refer to the agents rooted under node m. As notation, we use ri(S) to denote r(S, U) at node i, where U is the set of agents in the subtree rooted at node i excluding the agents in (Xi \ S). We sometimes refer to these values as the r-values of a node. The details of the r-value computations are given in Algorithm 1.

    Algorithm 1: Subprocedures for Core Membership

    Leaf-Node(i):
        ri(Xi) ← e(Xi)

    Introduce-Node(i):
        j ← child of i; m ← Xi \ Xj   {the introduced agent}
        for all S ⊆ Xj, S ≠ ∅ do
            C ← all hyperedges in the cut of m, S, and ¬(Xi \ S)
            ri(S ∪ {m}) ← rj(S) + xm − v(C)
            C ← all hyperedges in the cut of ¬m, S, and ¬(Xi \ S)
            ri(S) ← rj(S) − v(C)
        end for
        ri({m}) ← e({m})

    Forget-Node(i):
        j ← child of i; m ← Xj \ Xi   {the forgotten agent}
        for all S ⊆ Xi, S ≠ ∅ do
            ri(S) ← min(rj(S), rj(S ∪ {m}))
        end for

    Join-Node(i):
        {j, k} ← {left, right} children of i
        for all S ⊆ Xi, S ≠ ∅ do
            ri(S) ← rj(S) + rk(S) − e(S)
        end for

To determine whether the payoff vector x is in the core, we check during the r-value computation at each node whether all of the r-values are non-negative. If this holds for all nodes in the tree, the payoff vector x is in the core. The correctness of the algorithm is due to the following proposition.

Proposition 8. The payoff vector x is not in the core if and only if the r-value at some node i for some group S is negative.

Proof. (⇐) If the reserve at some node i for some group S is negative, then there exists a coalition T for which e(T) = x(T) − v(T) < 0; hence x is not in the core.

(⇒) Suppose x is not in the core; then there exists some group R∗ such that e(R∗) < 0. Let Xroot be the set of agents at the root. For any set S ⊆ Xroot, rroot(S) has base set S and limiting set ((N \ Xroot) ∪ S). The union over all of these ranges includes all sets U for which U ∩ Xroot ≠ ∅. Therefore, if R∗ is not disjoint from Xroot, the r-value of some group at the root is negative. If R∗ is disjoint from Xroot, consider the forest {Ti} resulting from the removal of all tree nodes that include agents in Xroot. By the running intersection property, the sets of agents covered by the trees Ti are disjoint. Thus, if R∗ = ∪i Si, where Si is the part of R∗ covered by tree Ti, then e(R∗) = Σi e(Si) < 0 implies that some group S∗i has negative excess as well. Therefore, we only need to check the r-values of the nodes of the individual trees in the forest. But for each tree in the forest, we can apply the same argument restricted to the agents in that tree. In the base case, we have the leaf nodes of the original tree decomposition, say for agent i; if R∗ = {i}, then r({i}) = e({i}) < 0. Therefore, by induction, if e(R∗) < 0, some reserve at some node is negative.

We next explain the intuition behind the correctness of the r-value computations at the tree nodes. A detailed proof of correctness of these computations can be found in the appendix, under Lemmas 1 and 2.

Proposition 9. The procedures in Algorithm 1 correctly compute the r-values at each of the tree nodes.

Proof. (Sketch) We perform a case analysis over the four types of tree nodes in a nice tree decomposition.

Leaf nodes (i): The only reserve value to be computed is ri(Xi), which equals r(Xi, Xi), and is therefore just the excess of the group Xi.

Forget nodes (i with child j): Let m be the forgotten agent. For any subset S ⊆ Xi, arg ri(S) must be chosen between the groups S and S ∪ {m}, and hence we choose the lower of the two r-values at node j.

Introduce nodes (i with child j): Let m be the introduced agent. For any subset T ⊆ Xi that includes m, let S denote (T \ {m}). By the running intersection property, there are no rules that involve m and agents of the subtree rooted at node i except those involving m and agents in Xi. As both the base set and the limiting set of the r-values at nodes j and i differ by {m}, for any group V that lies between the base set and the limiting set at node i, the excess of V differs by a constant amount from that of the corresponding group (V \ {m}) at node j. Therefore, the set arg ri(T) equals arg rj(S) ∪ {m}, and ri(T) = rj(S) + xm − v(cut), where v(cut) is the value of the rules in the cut between m and S. For any subset S ⊆ Xi that does not include m, we need to consider the values of the rules that include ¬m as a literal in the pattern. Also, when computing the reserve, the payoff xm does not contribute to the group S. Therefore, together with the running intersection property as argued above, we can show that ri(S) = rj(S) − v(cut).

Join nodes (i with left child j and right child k): For any given set S ⊆ Xi, consider the r-values of that set at j and k. If arg rj(S) or arg rk(S) includes agents not in S, then (arg rj(S) \ S) and (arg rk(S) \ S) are disjoint from each other, by the running intersection property. Therefore, we can decompose arg ri(S) into three sets: (arg rj(S) \ S) on the left, S in the middle, and (arg rk(S) \ S) on the right. The reserve rj(S) covers the excesses on the left and in the middle, whereas the reserve rk(S) covers those on the right and in the middle, so the excess in the middle is double-counted. We adjust for the double-counting by subtracting the excess of the middle, e(S), from the sum of the two reserves rj(S) and rk(S).

Finally, note that each step in the computation of the r-values of a node i takes time at most exponential in the size of Xi; hence the algorithm runs in time exponential only in the treewidth of the graph.
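To make the bottom-up computation concrete, here is a compact Python sketch of Algorithm 1, restricted to rules with positive literals only, so the ¬m cuts vanish; the tree encoding and all names are ours. Following Proposition 8, it tests every r-value at every node for negativity (the efficiency condition x(N) = v(N), if required, is a separate O(n) check):

    from itertools import combinations

    def nonempty_subsets(bag):
        s = sorted(bag)
        return [frozenset(c) for r in range(1, len(s) + 1)
                for c in combinations(s, r)]

    def in_core(root, rules, x):
        # rules: list of (frozenset of agents, weight); x: payoff dict
        def v(S):
            return sum(w for agents, w in rules if agents <= S)
        def e(S):  # the excess of Definition 5
            return sum(x[a] for a in S) - v(S)
        def cut(m, S):  # rules involving m and otherwise only agents of S
            return sum(w for agents, w in rules
                       if m in agents and agents - {m} <= S)
        ok = [True]
        def visit(node):  # returns the r-values of the node, bottom-up
            kind = node['kind']
            if kind == 'leaf':
                bag = frozenset(node['bag'])
                ri = {bag: e(bag)}
            elif kind == 'introduce':
                rj, m = visit(node['child']), node['agent']
                ri = {}
                for S, val in rj.items():
                    ri[S] = val  # positive literals only: the (not m) cut is empty
                    ri[S | {m}] = val + x[m] - cut(m, S)
                ri[frozenset({m})] = e(frozenset({m}))
            elif kind == 'forget':
                rj, m = visit(node['child']), node['agent']
                ri = {S: min(rj[S], rj[S | {m}])
                      for S in nonempty_subsets(node['bag'])}
            else:  # join
                rj, rk = visit(node['left']), visit(node['right'])
                ri = {S: rj[S] + rk[S] - e(S)
                      for S in nonempty_subsets(node['bag'])}
            if min(ri.values()) < 0:  # Proposition 8
                ok[0] = False
            return ri
        visit(root)
        return ok[0]

    # rules {1,2} -> 1 and {2,3} -> 1 on the path 1 - 2 - 3, with a nice tree
    # decomposition built by hand: leaf {2}, introduce 1, forget 1, introduce 3
    rules = [(frozenset({1, 2}), 1.0), (frozenset({2, 3}), 1.0)]
    t = {'kind': 'introduce', 'agent': 3, 'bag': {2, 3},
         'child': {'kind': 'forget', 'agent': 1, 'bag': {2},
                   'child': {'kind': 'introduce', 'agent': 1, 'bag': {1, 2},
                             'child': {'kind': 'leaf', 'bag': {2}}}}}
    print(in_core(t, rules, {1: 0.5, 2: 1.0, 3: 0.5}))  # True
    print(in_core(t, rules, {1: 0.0, 2: 0.0, 3: 0.0}))  # False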
5.4 Algorithm for Core Non-emptiness

We can extend the algorithm for Core-Membership into an algorithm for Core-Non-Emptiness. As described in section 2, whether the core is empty can be checked using the optimization program based on the balancedness condition (3). Unfortunately, that program has an exponential number of variables. On the other hand, the dual of the program has only n variables, and can be written as follows:

    minimize_{x ∈ R^n}  Σ_{i=1}^{n} xi
    subject to          x(S) ≥ v(S),  ∀S ⊆ N        (7)

By strong duality, the optimal value of (7) is equal to the optimal value of (4), the primal program described in section 2. Therefore, by the Bondareva-Shapley theorem, if the optimal value of (7) is greater than v(N), the core is empty. We can solve the dual program using the ellipsoid method with Core-Membership as a separation oracle, i.e., a procedure for returning a constraint that is violated. Note that a simple extension to the Core-Membership algorithm allows us to keep track of a set T for which e(T) < 0 during the r-value computation, and hence we can return the inequality for T as the violated constraint. Therefore, Core-Non-Emptiness runs in time polynomial in the running time of Core-Membership, which in turn runs in time exponential only in the treewidth of the graph. Note that when the core is not empty, this program returns an outcome in the core.
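For intuition, the sketch below solves the dual program (7) for the four-agent game of section 5.1 by brute-force constraint enumeration, standing in for the ellipsoid-plus-oracle machinery (which only matters asymptotically). It uses scipy's linprog; all other names are ours:

    from itertools import combinations
    from scipy.optimize import linprog

    def core_nonempty(n, v):
        # minimize sum(x) s.t. x(S) >= v(S) for all S; the core is
        # non-empty iff the optimum is (at most) v(N)
        agents = list(range(1, n + 1))
        A_ub, b_ub = [], []
        for r in range(1, n + 1):
            for S in combinations(agents, r):
                A_ub.append([-1.0 if a in S else 0.0 for a in agents])
                b_ub.append(-v.get(frozenset(S), 0.0))
        res = linprog(c=[1.0] * n, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n)
        return res.fun <= v.get(frozenset(agents), 0.0) + 1e-9, res.x

    v = {frozenset(S): 3/4 for S in [(1, 2), (1, 3), (1, 4), (2, 3, 4)]}
    v[frozenset({1, 2, 3, 4})] = 1.0
    print(core_nonempty(4, v))  # (False, ...): optimum 5/4 exceeds v(N) = 1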
6. CONCLUDING REMARKS

We have developed a fully expressive representation scheme for coalitional games whose size depends on the complexity of the interactions among the agents. Our focus on general representation is in contrast to the approach taken in [3, 4]. We have also developed an efficient algorithm for computing the Shapley value under this representation. While Core-Membership for MC-nets is coNP-complete, we have developed an algorithm for Core-Membership that runs in time exponential only in the treewidth of the agent graph, and we have extended the algorithm to solve Core-Non-Emptiness. Other than the algorithm for Core-Non-Emptiness in [4] under the restriction of non-negative edge weights, and that in [2] for superadditive games when the value of the grand coalition is given, we are not aware of any explicit description of algorithms for core-related problems in the literature.

The work in this paper is related to a number of areas in computer science, especially in artificial intelligence. For example, the graphical interpretation of MC-nets is closely related to Markov random fields (MRFs) from the Bayes nets community: both address the issue of conciseness of representation through the combinatorial structure of weighted hypergraphs. Indeed, Kearns et al. first applied these ideas to game theory by introducing a representation scheme derived from Bayes nets to represent non-cooperative games [6]. The representational issues faced in coalitional games are closely related to the problem of expressing valuations in combinatorial auctions [5, 10]; the OR-bid language, for example, is strongly related to superadditivity. The question of the representational power of different patterns is also related to Boolean expression complexity [12]. We believe that with a better understanding of the relationships among these related areas, we may be able to develop more efficient representations and algorithms for coalitional games.

Finally, we would like to end with some ideas for extending the work in this paper. One direction for increasing the conciseness of MC-nets is to allow the definition of equivalence classes of agents, similar to the idea of extending Bayes nets to probabilistic relational models. The concept of symmetry is prevalent in games, and the use of classes of agents would allow us to capture symmetry naturally and concisely. This would also address the problem of unappealingly asymmetric representations of symmetric games in our scheme. Along the same line of exploiting symmetry, as agents within the same class are symmetric with respect to each other, we can extend the idea above by allowing functional descriptions of marginal contributions.
More concretely, we could specify the value of a rule as a function of the number of agents of each relevant class that are present. The use of functions would allow concise descriptions of marginal diminishing returns (MDRs): without functions, the space needed to describe MDRs among n agents in MC-nets is O(n); with functions, the space required can be reduced to O(1).

Another idea for extending MC-nets is to augment the semantics with constructs specifying that certain rules cannot be applied simultaneously. This is useful in situations where a certain agent represents a type of exhaustible resource, so that rules depending on the presence of that agent should not apply simultaneously. For example, if agent i in the system stands for coal, we can either use the coal as fuel for a power plant or as input to a steel mill for making steel, but not both at the same time. Currently, to represent such situations, we have to specify rules that cancel out the effects of applying different rules together. The augmented semantics would simplify the representation by stating directly when rules cannot be applied together.

7. ACKNOWLEDGMENT

The authors would like to thank Chris Luhrs, Bob McGrew, Eugene Nudelman, and Qixiang Sun for fruitful discussions, and the anonymous reviewers for their helpful comments on the paper.

8. REFERENCES

[1] H. L. Bodlaender. Treewidth: Algorithmic techniques and results. In Proc. 22nd Symp. on Mathematical Foundations of Computer Science, pages 19-36. Springer-Verlag LNCS 1295, 1997.
[2] V. Conitzer and T. Sandholm. Complexity of determining nonemptiness of the core. In Proc. 18th Int. Joint Conf. on Artificial Intelligence, pages 613-618, 2003.
[3] V. Conitzer and T. Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proc. 19th Nat. Conf. on Artificial Intelligence, pages 219-225, 2004.
[4] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19:257-266, May 1994.
[5] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proc. 16th Int. Joint Conf. on Artificial Intelligence, pages 548-553, 1999.
[6] M. Kearns, M. L. Littman, and S. Singh. Graphical models for game theory. In Proc. 17th Conf. on Uncertainty in Artificial Intelligence, pages 253-260, 2001.
[7] J. Kleinberg, C. H. Papadimitriou, and P. Raghavan. On the value of private information. In Proc. 8th Conf. on Theoretical Aspects of Rationality and Knowledge, pages 249-257, 2001.
[8] C. Li and K. Sycara. Algorithms for combinatorial coalition formation and payoff division in an electronic marketplace. Technical report, Robotics Institute, Carnegie Mellon University, November 2001.
[9] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[10] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. 2nd ACM Conf. on Electronic Commerce, pages 1-12, 2000.
[11] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[12] I. Wegener. The Complexity of Boolean Functions. John Wiley & Sons, New York, October 1987.

APPENDIX

We now formally show the correctness of the r-value computations in Algorithm 1 for introduce nodes and join nodes.

Lemma 1. The procedure for computing the r-values of introduce nodes in Algorithm 1 is correct.

Proof. Let m be the newly introduced agent at node i.
Let U denote the set of agents in the subtree rooted at i. By the running intersection property, all interactions (hyperedges) between m and U must appear at node i. For all S ⊆ Xi with m ∈ S, let R denote ((U \ Xi) ∪ S) and Q denote (R \ {m}). Writing cut for the cut between m and the rest of the coalition, we have

    ri(S) = r(S, R)
          = min_{T: S ⊆ T ⊆ R} e(T)
          = min_{T: S ⊆ T ⊆ R} [x(T) − v(T)]
          = min_{T: S ⊆ T ⊆ R} [x(T \ {m}) + xm − v(T \ {m}) − v(cut)]
          = min_{T′: S \ {m} ⊆ T′ ⊆ Q} e(T′) + xm − v(cut)
          = rj(S \ {m}) + xm − v(cut)

The argument for sets S ⊆ Xi with m ∉ S is symmetric, except that xm does not contribute to the reserve, due to the absence of m.

Lemma 2. The procedure for computing the r-values of join nodes in Algorithm 1 is correct.

Proof. Consider any set S ⊆ Xi. Let Uj denote the set of agents in the subtree rooted at the left child, Rj denote ((Uj \ Xj) ∪ S), and Qj denote (Uj \ Xj); let Uk, Rk, and Qk be defined analogously for the right child, and let R denote ((U \ Xi) ∪ S). For any T with S ⊆ T ⊆ R, the sets T ∩ Qj and T ∩ Qk partition T \ S, and by the running intersection property no rule spans both Qj and Qk. We therefore have

    ri(S) = r(S, R)
          = min_{T: S ⊆ T ⊆ R} [x(T) − v(T)]
          = min_{T} [x(S) + x(T ∩ Qj) + x(T ∩ Qk)
                     − v(S) − v(T ∩ Qj) − v(T ∩ Qk)
                     − v(cut(S, T ∩ Qj)) − v(cut(S, T ∩ Qk))]
          = min_{T} [x(T ∩ Qj) − v(T ∩ Qj) − v(cut(S, T ∩ Qj))]
            + min_{T} [x(T ∩ Qk) − v(T ∩ Qk) − v(cut(S, T ∩ Qk))]
            + (x(S) − v(S))                                          (*)
          = min_{T} e(S ∪ (T ∩ Qj)) + min_{T} e(S ∪ (T ∩ Qk)) − (x(S) − v(S))
          = min_{T′: S ⊆ T′ ⊆ Rj} e(T′) + min_{T″: S ⊆ T″ ⊆ Rk} e(T″) − e(S)
          = rj(S) + rk(S) − e(S)

where (*) holds because T ∩ Qj and T ∩ Qk are disjoint, by the running intersection property of the tree decomposition, and hence the minimum of the sum decomposes into the sum of the minima.
On the other hand, the problem of determining whether a certain outcome is in the core is coNP-complete. 3. MARGINAL CONTRIBUTION NETS In this section, we will describe the Marginal Contribution Networks representation scheme. We will show that the idea is flexible, and we can easily extend it to increase its conciseness. We will also show how we can use this scheme to represent the recommendation game from the introduction. Finally, we will show that this scheme is fully expressive, and generalizes the representation schemes in [3, 4]. 3.1 Rules and Marginal Contribution Networks The basic idea behind marginal contribution networks (MC-nets) is to represent coalitional games using sets of rules. The rules in MC-nets have the following syntactic A rule is said to apply to a group of agents S if S meets the requirement of the Pattern. In the basic scheme, these patterns are conjunctions of agents, and S meets the requirement of the given pattern if S is a superset of it. The value of a group of agents is defined to be the sum over the values of all rules that apply to the group. For example, if the set of rules are then v ({a}) = 0, v ({b}) = 2, and v ({a, b}) = 5 + 2 = 7. MC-nets is a very flexible representation scheme, and can be extended in different ways. One simple way to extend it and increase its conciseness is to allow a wider class of patterns in the rules. A pattern that we will use throughout the remainder of the paper is one that applies only in the absence of certain agents. This is useful for expressing concepts such as substitutability or default values. Formally, we express such patterns by which has the semantics that such rule will apply to a group S only if {pi} mi = 1 ∈ S and {nj} nj = 1 ∈ / S. We will call the {pi} mi = 1 in the above pattern the positive literals, and {nj} nj = 1 the negative literals. Note that if the pattern of a rule consists solely of negative literals, we will consider that the empty set of agents will also satisfy such pattern, and hence v (∅) may be non-zero in the presence of negative literals. To demonstrate the increase in conciseness of representation, consider the unit game described in section 2.2. To represent such a game without using negative literals, we will need 2n rules for n players: we need a rule of value 1 for each individual agent, a rule of value − 1 for each pair of agents to counter the double-counting, a rule of value 1 for each triplet of agents, etc., similar to the inclusion-exclusion principle. On the other hand, using negative literals, we only need n rules: value 1 for the first agent, value 1 for the second agent in the absence of the first agent, value 1 for the third agent in the absence of the first two agents, etc. . The representational savings can be exponential in the number of agents. Given a game represented as a MC-net, we can interpret the set of rules that make up the game as a graph. We call this graph the agent graph. The nodes in the graph will represent the agents in the game, and for each rule in the MCnet, we connect all the agents in the rule together and assign a value to the clique formed by the set of agents. Notice that to accommodate negative literals, we will need to annotate the clique appropriately. This alternative view of MC-nets will be useful in our algorithm for CORE-MEMBERSHIP in section 5. We would like to end our discussion of the representation scheme by mentioning a trade-off between the expressiveness of patterns and the space required to represent them. 
To represent a coalitional game in characteristic form, one would need to specify all 2n − 1 values. There is no overhead on top of that since there is a natural ordering of the groups. For MC-nets, however, specification of the rules requires specifying both the patterns and the values. The patterns, if not represented compactly, may end up overwhelming the savings from having fewer values to specify. The space required for the patterns also leads to a tradeoff between the expressiveness of the allowed patterns and the simplicity of representing them. However, we believe that for most naturally arising games, there should be sufficient structure in the problem such that our representation achieves a net saving over the characteristic form. 3.2 Example: Recommendation Game As an example, we will use MC-net to represent the recommendation game discussed in the introduction. For each product, as the benefit of knowing about the product will count only once for each group, we need to capture substitutability among the agents. This can be captured by a scaled unit game. Suppose the value of the knowledge about product i is vi, and there are ni agents, denoted by {xji}, who know about the product, the game for product i can then be represented as the following rules: The entire game can then be built up from the sets of rules of each product. The space requirement will be O (mn *), where m is the number of products in the system, and n * is the maximum number of agents who knows of the same product. 3.3 Representation Power We will discuss the expressiveness and conciseness of our representation scheme and compare it with the previous works in this subsection. PROPOSITION 1. Marginal contribution networks constitute a fully expressive representation scheme. PROOF. Consider an arbitrary coalitional game hN, vi in characteristic form representation. We can construct a set of rules to describe this game by starting from the singleton sets and building up the set of rules. For any singleton set {i}, we create a rule {i} → v (i). For any pair of agents {i, j}, we create a rule {i ∧ j} → v ({i, j}) − v ({i}) − v ({j}. We can continue to build up rules in a manner similar to the inclusion-exclusion principle. Since the game is arbitrary, MC-nets are fully expressive. Using the construction outlined in the proof, we can show that our representation scheme can simulate the multi-issue representation scheme of [3] in almost the same amount of space. PROOF. Given a game in multi-issue representation, we start by describing each of the subgames, which are represented in characteristic form in [3], with a set of rules. We then build up the grand game by including all the rules from the subgames. Note that our representation may require a space larger by a linear factor due to the need to describe the patterns for each rule. On the other hand, our approach may have fewer than exponential number of rules for each subgame, depending on the structure of these subgames, and therefore may be more concise than multi-issue representation. On the other hand, there are games that require exponentially more space to represent under the multi-issue scheme compared to our scheme. PROPOSITION 3. Marginal contribution networks are exponentially more concise than multi-issue representation for certain games. PROOF. Consider a unit game over all the agents N. As explained in 3.1, this game can be represented in linear space using MC-nets with negative literals. 
However, as there is no decomposition of this game into smaller subgames, it will require space O (2n) to represent this game under the multiissue representation. Under the agent graph interpretation of MC-nets, we can see that MC-nets is a generalization of the graphical representation in [4], namely from weighted graphs to weighted hypergraphs. PROOF. Given a game in graphical form, G, for each edge (i, j) with weight wij in the graph, we create a rule {i, j} → wij. Clearly this takes exactly the same space as the size of G, and by the additive semantics of the rules, it represents the same game as G. 4. COMPUTING THE SHAPLEY VALUE Given a MC-net, we have a simple algorithm to compute the Shapley value of the game. Considering each rule as a separate game, we start by computing the Shapley value of the agents for each rule. For each agent, we then sum up the Shapley values of that agent over all the rules. We first show that this final summing process correctly computes the Shapley value of the agents. PROOF. For any group S, under the MC-nets representation, v (S) is defined to be the sum over the values of all the rules that apply to S. Therefore, considering each rule as a game, by the (ADD) axiom discussed in section 2, the Shapley value of the game created from aggregating all the rules is equal to the sum of the Shapley values over the rules. The remaining question is how to compute the Shapley values of the rules. We can separate the analysis into two cases, one for rules with only positive literals and one for rules with mixed literals. For rules that have only positive literals, the Shapley value of the agents is v/m, where v is the value of the rule and m is the number of agents in the rule. This is a direct consequence of the (SYM) axiom of the Shapley value, as the agents in a rule are indistinguishable from each other. For rules that have both positive and negative literals, we can consider the positive and the negative literals separately. For a given positive literal i, the rule will apply only if i occurs in a given permutation after the rest of the positive literals but before any of the negative literals. Formally, let φi denote the Shapley value of i, p denote the cardinality of the positive set, and n denote the cardinality of the negative set, then φi = (p − 1)! n! v (p + n)! v = For a given negative literal j, j will be responsible for cancelling the application of the rule if all positive literals come before the negative literals in the ordering, and j is the first among the negative literals. Therefore, By the (SYM) axiom, all positive literals will have the value of φi and all negative literals will have the value of φj. Note that the sum over all agents in rules with mixed literals is 0. This is to be expected as these rules contribute 0 to the grand coalition. The fact that these rules have no effect on the grand coalition may appear odd at first. But this is because the presence of such rules is to define the values of coalitions smaller than the grand coalition. In terms of computational complexity, given that the Shapley value of any agent in a given rule can be computed in time linear in the pattern of the rule, the total running time of the algorithm for computing the Shapley value of the game is linear in the size of the input. 5. ANSWERING CORE-RELATED QUESTIONS There are a few different but related computational problems associated with the solution concept of the core. We will focus on the following two problems: Definition 1. 
(CORE-MEMBERSHIP) Given a coalitional game and a payoff vector x, determine if x is in the core. Definition 2. (CORE-NON-EMPTINESS) Given a coalitional game, determine if the core is non-empty. In the rest of the section, we will first show that these two problems are coNP-complete and coNP-hard respectively, and discuss some complexity considerations about these problems. We will then review the main ideas of tree decomposition as it will be used extensively in our algorithm for CORE-MEMBERSHIP. Next, we will present the algorithm for CORE-MEMBERSHIP, and show that the algorithm runs in polynomial time for graphs of bounded treewidth. We end by extending this algorithm to answer the question of CORENON-EMPTINESS in polynomial time for graphs of bounded treewidth. 5.1 Computational Complexity The hardness of CORE-MEMBERSHIP and CORE-NONEMPTINESS follows directly from the hardness results of games over weighted graphs in [4]. PROOF. CORE-MEMBERSHIP in MC-nets is in the class of coNP since any set of agents S of which v (S)> x (S) will serve as a certificate to show that x does not belong to the core. As for its hardness, given any instance of COREMEMBERSHIP for a game in graphical form of [4], we can encode the game in exactly the same space using MC-net due to Proposition 4. Since CORE-MEMBERSHIP for games in graphical form is coNP-complete, CORE-MEMBERSHIP in MC-nets is coNP-hard. PROPOSITION 7. CORE-NON-EMPTINESS for games represented as marginal contribution networks is coNP-hard. PROOF. The same argument for hardness between games in graphical frm and MC-nets holds for the problem of CORENON-EMPTINESS. We do not know of a certificate to show that CORE-NONEMPTINESS is in the class of coNP as of now. Note that the "obvious" certificate of a balanced set of weights based on the Bondereva-Shapley theorem is exponential in size. In [4], Deng and Papadimitriou showed the coNP-completeness of CORE-NON-EMPTINESS via a combinatorial characterization, namely that the core is non-empty if and only if there is no negative cut in the graph. In MC-nets, however, there need not be a negative hypercut in the graph for the core to be empty, as demonstrated by the following game Applying the Bondereva-Shapley theorem, if we let λ12 = λ13 = λ14 = 1/3, and λ234 = 2/3, this set of weights demonstrates that the game is not balanced, and hence the core is empty. On the other hand, this game can be represented with MC-nets as follows (weights on hyperedges): No matter how the set is partitioned, the sum over the weights of the hyperedges in the cut is always non-negative. To overcome the computational hardness of these problems, we have developed algorithms that are based on tree decomposition techniques. For CORE-MEMBERSHIP, our algorithm runs in time exponential only in the treewidth of the agent graph. Thus, for graphs of small treewidth, such as trees, we have a tractable solution to determine if a payoff vector is in the core. By using this procedure as a separation oracle, i.e., a procedure for returning the inequality violated by a candidate solution, to solving a linear program that is related to CORE-NON-EMPTINESS using the ellipsoid method, we can obtain a polynomial time algorithm for CORE-NON-EMPTINESS for graphs of bounded treewidth. 5.2 Review of Tree Decomposition As our algorithm for CORE-MEMBERSHIP relies heavily on tree decomposition, we will first briefly review the main ideas in tree decomposition and treewidth .3 Definition 3. 
A tree decomposition of a graph G = (V, E) is a pair (X, T), where T = (I, F) is a tree and X = {Xi | i ∈ I} is a family of subsets of V, one for each node of T, such that

• ∪i∈I Xi = V;
• For all edges (v, w) ∈ E, there exists an i ∈ I with v ∈ Xi and w ∈ Xi; and
• (Running Intersection Property) For all i, j, k ∈ I: if j is on the path from i to k in T, then Xi ∩ Xk ⊆ Xj.

The treewidth of a tree decomposition is defined as the maximum cardinality over all sets in X, less one. The treewidth of a graph is defined as the minimum treewidth over all tree decompositions of the graph. Given a tree decomposition, we can convert it into a nice tree decomposition of the same treewidth, and of size linear in that of T.

Definition 4. A tree decomposition T is nice if T is rooted and has four types of nodes:

Leaf nodes i are leaves of T with |Xi| = 1.
Introduce nodes i have one child j such that Xi = Xj ∪ {v} for some v ∈ V.
Forget nodes i have one child j such that Xi = Xj \ {v} for some v ∈ Xj.
Join nodes i have two children j and k with Xi = Xj = Xk.

An example of a (partial) nice tree decomposition together with a classification of the different types of nodes is in Figure 1.

[Figure 1: Example of a (partial) nice tree decomposition]

In the following section, we will refer to nodes in the tree decomposition as nodes, and nodes in the agent graph as agents.

5.3 Algorithm for Core Membership

Our algorithm for CORE-MEMBERSHIP takes as an input a nice tree decomposition T of the agent graph and a payoff vector x. By definition, if x belongs to the core, then for all groups S ⊆ N, x(S) ≥ v(S). Therefore, the difference x(S) − v(S) measures how "close" the group S is to violating the core condition. We call this difference the excess of group S.

Definition 5. The excess of a coalition S, e(S), is defined as x(S) − v(S).

A brute-force approach to determine if a payoff vector belongs to the core will have to check that the excesses of all groups are non-negative. However, this approach ignores the structure in the agent graph that will allow an algorithm to infer that certain groups have non-negative excesses due to the excesses computed elsewhere in the graph. Tree decomposition is the key to taking advantage of such inferences in a structured way.

For now, let us focus on rules with positive literals. Suppose we have already checked that the excesses of all sets R ⊆ U are non-negative, and we would like to check if the addition of an agent i to the set U will create a group with negative excess. A naïve solution will be to compute the excesses of all sets that include i. The excess of such a group can be computed incrementally:

e(R ∪ {i}) = e(R) + xi − v(c),   (6)

where c is the cut between R and i, and v(c) is the sum of the weights of the edges in the cut. However, suppose that from the tree decomposition, we know that i is only connected to a subset of U, say S, which we will call the entry set to U. Ideally, because i does not share any edges with members of Ū = (U \ S), we would hope that an algorithm can take advantage of this structure by checking only sets that are subsets of (S ∪ {i}). This computational saving may be possible since (xi − v(c)) in the update equation (6) does not depend on Ū. However, we cannot simply ignore Ū, as members of Ū may still influence the excesses of groups that include agent i through group S. Specifically, if there exists a group T ⊃ S such that e(T) < e(S), then even when e(S ∪ {i}) has non-negative excess, e(T ∪ {i}) may have negative excess.
In other words, the excess available at S may have been "drained" away due to T. This motivates the definition of the reserve of a group.

Definition 6. The reserve of a coalition S relative to a coalition U is the minimum excess over all coalitions between S and U, i.e., over all T with S ⊆ T ⊆ U. We denote this value by r(S, U). We will refer to the group T that has the minimum excess as arg r(S, U). We will also call U the limiting set of the reserve and S the base set of the reserve.

Our algorithm works by keeping track of the reserves of all non-empty subsets that can be formed by the agents of a node, at each of the nodes of the tree decomposition. Starting from the leaves of the tree and working towards the root, at each node i, our algorithm computes the reserves of all groups S ⊆ Xi, limited by the set of agents in the subtree rooted at i, Ti, except those in (Xi \ S). The agents in (Xi \ S) are excluded to ensure that S is an entry set. Specifically, S is the entry set to ((Ti \ Xi) ∪ S).

To accommodate negative literals, we will need to make two adjustments. Firstly, the cut between an agent m and a set S at node i now refers to the cut among agent m, set S, and set ¬(Xi \ S), and its value must be computed accordingly. Also, when an agent m is introduced to a group at an introduce node, we will also need to consider the change in the reserves of groups that do not include m, due to possible cuts involving ¬m and the group.

As an example of the reserve values we keep track of at a tree node, consider node i of the tree in Figure 1. At node i, we keep track of the reserves of each non-empty subset of Xi, with limiting sets that extend to the agents rooted under node m. For notational use, we will use ri(S) to denote r(S, U) at node i, where U is the set of agents in the subtree rooted at node i excluding agents in (Xi \ S). We sometimes refer to these values as the r-values of a node. The details of the r-value computations are in Algorithm 1; the excerpt below shows the case of an introduce node.

Algorithm 1 (excerpt): r-value computation at an introduce node i
2: j ← child of i
3: m ← Xi \ Xj  {the introduced agent}
4: for all S ⊆ Xj, S ≠ ∅ do
5:   C ← all hyperedges in the cut of m, S, and ¬(Xi \ S)
6:   ri(S ∪ {m}) ← rj(S) + xm − v(C)
7:   C ← all hyperedges in the cut of ¬m, S, and ¬(Xi \ S)
8:   ri(S) ← rj(S) − v(C)
9: end for

To determine if the payoff vector x is in the core, during the r-value computation at each node, we can check if all of the r-values are non-negative. If this is so for all nodes in the tree, the payoff vector x is in the core. The correctness of the algorithm is due to the following proposition.

PROPOSITION 8. The payoff vector x is not in the core if and only if the r-value at some node i for some group S is negative.

PROOF. (⇐) If the reserve at some node i for some group S is negative, then there exists a coalition T for which e(T) = x(T) − v(T) < 0; hence x is not in the core.

(⇒) Suppose x is not in the core; then there exists some group R* such that e(R*) < 0. Let Xroot be the set of agents at the root. Consider any set S ⊆ Xroot; rroot(S) has base set S and limiting set ((N \ Xroot) ∪ S). The union over all of these ranges includes all sets U for which U ∩ Xroot ≠ ∅. Therefore, if R* is not disjoint from Xroot, the r-value for some group in the root is negative. If R* is disjoint from Xroot, consider the forest {Ti} resulting from the removal of all tree nodes that include agents in Xroot. R* must then be contained in the agents of one of these trees, and the corresponding group within that tree, call it S*i, has negative excess as well. Therefore, we only need to check the r-values of the nodes on the individual trees in the forest.
But for each tree in the forest, we can apply the same argument restricted to the agents in the tree. In the base case, we have the leaf nodes of the original tree decomposition, say, for agent i. If R* = {i}, then r({i}) = e({i}) < 0. Therefore, by induction, if e(R*) < 0, some reserve at some node would be negative.

We will next explain the intuition behind the correctness of the computations for the r-values in the tree nodes. A detailed proof of correctness of these computations can be found in the appendix under Lemmas 1 and 2.

PROPOSITION 9. The procedure in Algorithm 1 correctly computes the r-values at each of the tree nodes.

PROOF. (SKETCH) We can perform a case analysis over the four types of tree nodes in a nice tree decomposition.

Leaf nodes (i): The only reserve value to be computed is ri(Xi), which equals r(Xi, Xi), and therefore it is just the excess of group Xi.

Forget nodes (i with child j): Let m be the forgotten node. For any subset S ⊆ Xi, arg ri(S) must be chosen between the groups S and S ∪ {m}, and hence we choose the lower of the two r-values at node j.

Introduce nodes (i with child j): Let m be the introduced node. For any subset T ⊆ Xi that includes m, let S denote (T \ {m}). By the running intersection property, there are no rules that involve m and agents of the subtree rooted at node i except those involving m and agents in Xi. As both the base set and the limiting set of the r-values of node j and node i differ by {m}, for any group V that lies between the base set and the limiting set of node i, the excess of group V will differ by a constant amount from the corresponding group (V \ {m}) at node j. Therefore, the set arg ri(T) equals the set arg rj(S) ∪ {m}, and ri(T) = rj(S) + xm − v(cut), where v(cut) is the value of the rules in the cut between m and S. For any subset S ⊂ Xi that does not include m, we need to consider the values of rules that include ¬m as a literal in the pattern. Also, when computing the reserve, the payoff xm will not contribute to group S. Therefore, together with the running intersection property as argued above, we can show that ri(S) = rj(S) − v(cut).

Join nodes (i with left child j and right child k): For any given set S ⊆ Xi, consider the r-values of that set at j and k. If arg rj(S) or arg rk(S) includes agents not in S, then arg rj(S) and arg rk(S) will be disjoint from each other due to the running intersection property. Therefore, we can decompose arg ri(S) into three sets: (arg rj(S) \ S) on the left, S in the middle, and (arg rk(S) \ S) on the right. The reserve rj(S) will cover the excesses on the left and in the middle, whereas the reserve rk(S) will cover those on the right and in the middle, and so the excesses in the middle are double-counted. We adjust for the double-counting by subtracting the excesses in the middle from the sum of the two reserves rj(S) and rk(S).
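As a point of reference for the tree-based recursion, the brute-force check that the algorithm improves upon is easy to state in code. The following Python sketch (with our own function names, reusing the rule triples from the Shapley sketch above) computes the minimum excess over all coalitions directly; x is in the core iff this minimum is non-negative, which is exactly the criterion of Proposition 8.

```python
from itertools import combinations

def value(rules, coalition):
    """v(S) under MC-net semantics: the sum of the values of all rules
    whose positive literals all lie in S and whose negative literals
    all lie outside S."""
    s = set(coalition)
    return sum(val for pos, neg, val in rules
               if pos <= s and not (neg & s))

def min_excess(rules, x, agents):
    """Minimum excess e(S) = x(S) - v(S) over all non-empty coalitions.
    Exponential in the number of agents; the tree-decomposition
    algorithm obtains the same answer in time exponential only in the
    treewidth of the agent graph."""
    agents = list(agents)
    best = None
    for size in range(1, len(agents) + 1):
        for S in combinations(agents, size):
            e = sum(x[i] for i in S) - value(rules, S)
            best = e if best is None else min(best, e)
    return best

# x is in the core iff min_excess(rules, x, agents) >= 0.
```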
Finally, note that each step in the computation of the r-values at each node i takes time at most exponential in the size of Xi; hence the algorithm runs in time exponential only in the treewidth of the graph.

5.4 Algorithm for Core Non-emptiness

We can extend the algorithm for CORE-MEMBERSHIP into an algorithm for CORE-NON-EMPTINESS. As described in section 2, whether the core is empty can be checked using the optimization program based on the balancedness condition (3). Unfortunately, that program has an exponential number of variables. On the other hand, the dual of the program has only n variables, and can be written as follows:

minimize x1 + ... + xn over x ∈ Rn, subject to x(S) ≥ v(S) for all S ⊆ N.   (7)

By strong duality, the optimal value of (7) is equal to the optimal value of (4), the primal program described in section 2. Therefore, by the Bondareva-Shapley theorem, if the optimal value of (7) is greater than v(N), the core is empty. We can solve the dual program using the ellipsoid method with CORE-MEMBERSHIP as a separation oracle, i.e., a procedure for returning a constraint that is violated. Note that a simple extension to the CORE-MEMBERSHIP algorithm will allow us to keep track of a set T for which e(T) < 0 during the r-value computation, and hence we can return the inequality for T as the violated constraint. Therefore, CORE-NON-EMPTINESS can run in time polynomial in the running time of CORE-MEMBERSHIP, which in turn runs in time exponential only in the treewidth of the graph. Note that when the core is not empty, this program will return an outcome in the core.
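For intuition (and for small games), the dual program (7) can also be solved directly by writing out all 2^n − 1 constraints instead of using the ellipsoid method. The following Python sketch does exactly that with scipy's LP solver, reusing the value() function from the sketch above; it is our own illustration and scales only to small n.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def core_nonempty(rules, agents):
    """Solve the dual program (7) by brute-force constraint enumeration:
    minimize sum_i x_i subject to x(S) >= v(S) for all S.  The core is
    non-empty iff the optimum equals v(N), in which case the optimal x
    is an outcome in the core."""
    agents = list(agents)
    n = len(agents)
    index = {a: k for k, a in enumerate(agents)}
    A_ub, b_ub = [], []
    for size in range(1, n + 1):
        for S in combinations(agents, size):
            row = np.zeros(n)
            for a in S:
                row[index[a]] = -1.0   # encode x(S) >= v(S) as -x(S) <= -v(S)
            A_ub.append(row)
            b_ub.append(-value(rules, S))
    res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n, method="highs")
    is_nonempty = res.fun <= value(rules, agents) + 1e-9
    return is_nonempty, res.x
```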
6. CONCLUDING REMARKS

We have developed a fully expressive representation scheme for coalitional games whose size depends on the complexity of the interactions among the agents. Our focus on general representation is in contrast to the approach taken in [3, 4]. We have also developed an efficient algorithm for the computation of the Shapley values for this representation. While CORE-MEMBERSHIP for MC-nets is coNP-complete, we have developed an algorithm for CORE-MEMBERSHIP that runs in time exponential only in the treewidth of the agent graph. We have also extended the algorithm to solve CORE-NON-EMPTINESS. Other than the algorithm for CORE-NON-EMPTINESS in [4] under the restriction of non-negative edge weights, and that in [2] for superadditive games when the value of the grand coalition is given, we are not aware of any explicit description of algorithms for core-related problems in the literature.

The work in this paper is related to a number of areas in computer science, especially in artificial intelligence. For example, the graphical interpretation of MC-nets is closely related to Markov random fields (MRFs) of the Bayes nets community. Both address the issue of conciseness of representation by using the combinatorial structure of weighted hypergraphs. In fact, Kearns et al. first applied these ideas to game theory by introducing a representation scheme derived from Bayes nets to represent non-cooperative games [6]. The representational issues faced in coalitional games are closely related to the problem of expressing valuations in combinatorial auctions [5, 10]. The OR-bid language, for example, is strongly related to superadditivity. The question of the representation power of different patterns is also related to Boolean expression complexity [12]. We believe that with a better understanding of the relationships among these related areas, we may be able to develop more efficient representations and algorithms for coalitional games.

Finally, we would like to end with some ideas for extending the work in this paper. One direction to increase the conciseness of MC-nets is to allow the definition of equivalence classes of agents, similar to the idea of extending Bayes nets to probabilistic relational models. The concept of symmetry is prevalent in games, and the use of classes of agents will allow us to capture symmetry naturally and concisely. This will also address the problem of unpleasingly asymmetric representations of symmetric games in our representation. Along the line of exploiting symmetry, as the agents within the same class are symmetric with respect to each other, we can extend the idea above by allowing functional descriptions of marginal contributions. More concretely, we can specify the value of a rule as dependent on the number of agents of each relevant class. The use of functions will allow concise description of marginal diminishing returns (MDRs). Without the use of functions, the space needed to describe MDRs among n agents in MC-nets is O(n); with the use of functions, the space required can be reduced to O(1).

Another idea to extend MC-nets is to augment the semantics to allow constructs specifying that certain rules cannot be applied simultaneously. This is useful in situations where a certain agent represents a type of exhaustible resource, and therefore rules that depend on the presence of the agent should not apply simultaneously. For example, if agent i in the system stands for coal, we can either use it as fuel for a power plant or as input to a steel mill for making steel, but not for both at the same time. Currently, to represent such situations, we have to specify rules to cancel out the effects of applications of different rules. The augmented semantics can simplify the representation by specifying when rules cannot be applied together.
Learning User Interaction Models for Predicting Web Search Result Preferences Eugene Agichtein Microsoft Research eugeneag@microsoft.com Eric Brill Microsoft Research brill@microsoft.com Susan Dumais Microsoft Research sdumais@microsoft.com Robert Ragno Microsoft Research rragno@microsoft.com ABSTRACT Evaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance. We present a real-world study of modeling the behavior of web search users to predict web search result preferences. Accurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks. Our key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected noisy user behavior. We show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods. We generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone. We report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Search process, relevance feedback. General Terms Algorithms, Measurement, Performance, Experimentation. 1. INTRODUCTION Relevance measurement is crucial to web search and to information retrieval in general. Traditionally, search relevance is measured by using human assessors to judge the relevance of query-document pairs. However, explicit human ratings are expensive and difficult to obtain. At the same time, millions of people interact daily with web search engines, providing valuable implicit feedback through their interactions with the search results. If we could turn these interactions into relevance judgments, we could obtain large amounts of data for evaluating, maintaining, and improving information retrieval systems. Recently, automatic or implicit relevance feedback has developed into an active area of research in the information retrieval community, at least in part due to an increase in available resources and to the rising popularity of web search. However, most traditional IR work was performed over controlled test collections and carefully-selected query sets and tasks. Therefore, it is not clear whether these techniques will work for general real-world web search. A significant distinction is that web search is not controlled. Individual users may behave irrationally or maliciously, or may not even be real users; all of this affects the data that can be gathered. But the amount of the user interaction data is orders of magnitude larger than anything available in a non-web-search setting. By using the aggregated behavior of large numbers of users (and not treating each user as an individual expert) we can correct for the noise inherent in individual interactions, and generate relevance judgments that are more accurate than techniques not specifically designed for the web search setting. Furthermore, observations and insights obtained in laboratory settings do not necessarily translate to real world usage. Hence, it is preferable to automatically induce feedback interpretation strategies from large amounts of user interactions. 
Automatically learning to interpret user behavior would allow systems to adapt to changing conditions, changing user behavior patterns, and different search settings. We present techniques to automatically interpret the collective behavior of users interacting with a web search engine to predict user preferences for search results. Our contributions include:

• A distributional model of user behavior, robust to noise within individual user sessions, that can recover relevance preferences from user interactions (Section 3).
• Extensions of existing clickthrough strategies to include richer browsing and interaction features (Section 4).
• A thorough evaluation of our user behavior models, as well as of previously published state-of-the-art techniques, over a large set of web search sessions (Sections 5 and 6).

We discuss our results and outline future directions and various applications of this work in Section 7, which concludes the paper.

2. BACKGROUND AND RELATED WORK

Ranking search results is a fundamental problem in information retrieval. The most common approaches in the context of the web use both the similarity of the query to the page content, and the overall quality of a page [3, 20]. A state-of-the-art search engine may use hundreds of features to describe a candidate page, employing sophisticated algorithms to rank pages based on these features. Current search engines are commonly tuned on human relevance judgments. Human annotators rate a set of pages for a query according to perceived relevance, creating the gold standard against which different ranking algorithms can be evaluated. Reducing the dependence on explicit human judgments by using implicit relevance feedback has been an active topic of research.

Several research groups have evaluated the relationship between implicit measures and user interest. In these studies, both reading time and explicit ratings of interest are collected. Morita and Shinoda [14] studied the amount of time that users spent reading Usenet news articles and found that reading time could predict a user's interest levels. Konstan et al. [13] showed that reading time was a strong predictor of user interest in their GroupLens system. Oard and Kim [15] studied whether implicit feedback could substitute for explicit ratings in recommender systems. More recently, Oard and Kim [16] presented a framework for characterizing observable user behaviors using two dimensions: the underlying purpose of the observed behavior and the scope of the item being acted upon. Goecks and Shavlik [8] approximated human labels by collecting a set of page activity measures while users browsed the World Wide Web. The authors hypothesized correlations between a high degree of page activity and a user's interest. While the results were promising, the sample size was small and the implicit measures were not tested against explicit judgments of user interest. Claypool et al. [6] studied how several implicit measures related to the interests of the user. They developed a custom browser called the Curious Browser to gather data, in a computer lab, about implicit interest indicators and to probe for explicit judgments of Web pages visited. Claypool et al. found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong positive relationship with explicit interest, while individual scrolling methods and mouse-clicks were not correlated with explicit interest.
Fox et al. [7] explored the relationship between implicit and explicit measures in Web search. They built an instrumented browser to collect data and then developed Bayesian models to relate implicit measures and explicit relevance judgments for both individual queries and search sessions. They found that clickthrough was the most important individual variable but that predictive accuracy could be improved by using additional variables, notably dwell time on a page. Joachims [9] developed valuable insights into the collection of implicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. More recently, Joachims et al. [10] presented an empirical evaluation of interpreting clickthrough evidence. By performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthrough events in a controlled, laboratory setting. A more comprehensive overview of studies of implicit measures is described in Kelly and Teevan [12]. Unfortunately, the extent to which existing research applies to real-world web search is unclear. In this paper, we build on previous research to develop robust user behavior interpretation models for the real web search setting.

3. LEARNING USER BEHAVIOR MODELS

As we noted earlier, real web search user behavior can be noisy in the sense that user behaviors are only probabilistically related to explicit relevance judgments and preferences. Hence, instead of treating each user as a reliable expert, we aggregate information from many unreliable user search session traces. Our main approach is to model user web search behavior as if it were generated by two components: a relevance component (query-specific behavior influenced by the apparent result relevance) and a background component (users clicking indiscriminately). Our general idea is to model the deviations from the expected user behavior. Hence, in addition to basic features, which we will describe in detail in Section 3.2, we compute derived features that measure the deviation of the observed feature value for a given search result from the expected values for a result, with no query-dependent information. We motivate our intuitions with a particularly important behavior feature, result clickthrough, analyzed next, and then introduce our general model of user behavior that incorporates other user actions (Section 3.2).

3.1 A Case Study in Click Distributions

As we discussed, we aggregate statistics across many user sessions. A click on a result may mean that some user found the result summary promising; it could also be caused by people clicking indiscriminately. In general, individual user behavior, clickthrough and otherwise, is noisy, and cannot be relied upon for accurate relevance judgments. The data set is described in more detail in Section 5.2. For the present it suffices to note that we focus on a sample of 3,500 queries randomly drawn from query logs. For these queries we aggregate click data over more than 120,000 searches performed over a three-week period. We also have explicit relevance judgments for the top 10 results for each query. Figure 3.1 shows the relative clickthrough frequency as a function of result position. The aggregated click frequency at result position p is calculated by first computing the frequency of a click at p for each query (i.e., approximating the probability that a randomly chosen click for that query would land on position p).
These frequencies are then averaged across queries and normalized so that the relative frequency of a click at the top position is 1. The resulting distribution agrees with previous observations that users click more often on top-ranked results. This reflects the fact that search engines do a reasonable job of ranking results, as well as biases to click top results, and noise; we attempt to separate these components in the analysis that follows.

[Figure 3.1: Relative click frequency for top 30 result positions over 3,500 queries and 120,000 searches.]

First we consider the distribution of clicks for the relevant documents for these queries. Figure 3.2 reports the aggregated click distribution for queries with varying Position of Top Relevant document (PTR). While there are many clicks above the first relevant document for each distribution, there are clear peaks in click frequency at the first relevant result. For example, for queries with the top relevant result in position 2, the relative click frequency at that position (second bar) is higher than the click frequency at other positions for these queries. Nevertheless, many users still click on the non-relevant results in position 1 for such queries. This shows a stronger property of the bias in the click distribution towards top results: users click more often on results that are ranked higher, even when they are not relevant.

[Figure 3.2: Relative click frequency for queries with varying PTR (Position of Top Relevant document).]

If we subtract the background distribution of Figure 3.1 from the mixed distribution of Figure 3.2, we obtain the distribution in Figure 3.3, where the remaining click frequency distribution can be interpreted as the relevance component of the results. Note that the corrected click distribution correlates closely with actual result relevance as explicitly rated by human judges.

[Figure 3.3: Relative corrected click frequency for relevant documents with varying PTR (Position of Top Relevant).]

3.2 Robust User Behavior Model

Clicks on search results comprise only a small fraction of the post-search activities typically performed by users. We now introduce our techniques for going beyond the clickthrough statistics and explicitly modeling post-search user behavior. Although clickthrough distributions are heavily biased towards top results, we have just shown how the 'relevance-driven' click distribution can be recovered by correcting for the prior, background distribution. We conjecture that other aspects of user behavior (e.g., page dwell time) are similarly distorted. Our general model includes two feature types for describing user behavior: direct and deviational, where the former is the directly measured value and the latter is the deviation from the expected value estimated from the overall (query-independent) distribution of the corresponding directly observed feature.
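The correction behind Figures 3.1 through 3.3 reduces to a few lines of code. The following Python sketch is our own illustration (the data layout is assumed): it computes the normalized background click distribution and the background-subtracted relevance component.

```python
import numpy as np

def background_click_distribution(clicks_by_query, positions=30):
    """Average of per-query click distributions, normalized so that the
    relative frequency at the top position is 1.  `clicks_by_query` maps
    each query to a list of 1-based clicked result positions aggregated
    over its search sessions."""
    per_query = []
    for clicked_positions in clicks_by_query.values():
        freq = np.zeros(positions)
        for p in clicked_positions:
            if 1 <= p <= positions:
                freq[p - 1] += 1
        if freq.sum() > 0:
            per_query.append(freq / freq.sum())  # P(click at p | this query)
    background = np.mean(per_query, axis=0)
    return background / background[0]

def corrected_distribution(observed, background):
    """Relevance component of a click distribution, as in Figure 3.3:
    the observed relative click frequencies minus the background."""
    return observed - background
```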
More formally, we postulate that the observed value o of a feature f for a query q and result r can be expressed as a mixture of two components:

o(q, r, f) = C(f) + rel(q, r, f),   (1)

where C(f) is the prior background distribution for values of f aggregated across all queries, and rel(q, r, f) is the component of the behavior influenced by the relevance of the result r. As illustrated above with the clickthrough feature, if we subtract the background distribution (i.e., the expected clickthrough for a result at a given position) from the observed clickthrough frequency at a given position, we can approximate the relevance component of the clickthrough value (of course, this is only a rough estimate, as the observed background distribution also includes the relevance component). In order to reduce the effect of individual user variations in behavior, we average observed feature values across all users and search sessions for each query-URL pair. This aggregation gives the additional robustness of not relying on individual noisy user interactions. In summary, the user behavior for a query-URL pair is represented by a feature vector that includes both the directly observed features and the derived, corrected feature values. We now describe the actual features we use to represent user behavior.

3.3 Features for Representing User Behavior

Our goal is to devise a sufficiently rich set of features that allow us to characterize when a user will be satisfied with a web search result. Once the user has submitted a query, they perform many different actions (reading snippets, clicking results, navigating, refining their query) which we capture and summarize. This information was obtained via opt-in client-side instrumentation from users of a major web search engine. This rich representation of user behavior is similar in many respects to the recent work by Fox et al. [7]. An important difference is that many of our features are (by design) query-specific, whereas theirs was (by design) a general, query-independent model of user behavior. Furthermore, we include derived, distributional features computed as described above. The features we use to represent user search interactions are summarized in Table 3.1. For clarity, we organize the features into the groups Query-text, Clickthrough, and Browsing.

Query-text features: Users decide which results to examine in more detail by looking at the result title, URL, and summary; in some cases, looking at the original document is not even necessary. To model this aspect of user experience we defined features to characterize the nature of the query and its relation to the snippet text. These include features such as the overlap between the words in the title and in the query (TitleOverlap) and the fraction of words shared by the query and the result summary (SummaryOverlap).

Browsing features: Simple aspects of the user's web page interactions can be captured and quantified. These features are used to characterize interactions with pages beyond the results page. For example, we compute how long users dwell on a page (TimeOnPage) or domain (TimeOnDomain), and the deviation of dwell time from the expected page dwell time for a query. These features allow us to model intra-query diversity of page browsing behavior (e.g., navigational queries, on average, are likely to have shorter page dwell time than transactional or informational queries). We include both the direct features and the derived features described above.
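To make the direct/deviational distinction concrete, here is a small Python sketch of how the dwell-time features of Table 3.1 could be derived for one query-URL pair; the estimator for the expected dwell time is our assumption, not a detail given in the paper.

```python
import numpy as np

def dwell_time_features(dwell_times, expected_dwell):
    """Direct and deviational dwell-time features for one query-URL pair.
    `dwell_times` lists the observed TimeOnPage values across the search
    sessions for this query and URL; `expected_dwell` is the
    query-independent expected dwell time for the page, playing the role
    of C(f) in Equation (1)."""
    avg = float(np.mean(dwell_times))   # AverageDwellTime (direct)
    deviation = avg - expected_dwell    # DwellTimeDeviation (derived),
                                        # an estimate of rel(q, r, f)
    return {"AverageDwellTime": avg, "DwellTimeDeviation": deviation}
```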
Clickthrough features: Clicks are a special case of user interaction with the search engine. We include all the features necessary to learn the clickthrough-based strategies described in Sections 4.1 and 4.2. For example, for a query-URL pair we provide the number of clicks for the result (ClickFrequency), as well as whether there was a click on the result below or above the current URL (IsClickBelow, IsClickAbove). The derived feature values such as ClickRelativeFrequency and ClickDeviation are computed as described in Equation 1.

Table 3.1: Features used to represent post-search interactions for a given query and search result URL

Query-text features:
TitleOverlap: Fraction of shared words between query and title
SummaryOverlap: Fraction of shared words between query and summary
QueryURLOverlap: Fraction of shared words between query and URL
QueryDomainOverlap: Fraction of shared words between query and domain
QueryLength: Number of tokens in query
QueryNextOverlap: Average fraction of words shared with next query

Browsing features:
TimeOnPage: Page dwell time
CumulativeTimeOnPage: Cumulative time for all subsequent pages after search
TimeOnDomain: Cumulative dwell time for this domain
TimeOnShortUrl: Cumulative time on URL prefix, dropping parameters
IsFollowedLink: 1 if followed link to result, 0 otherwise
IsExactUrlMatch: 0 if aggressive normalization used, 1 otherwise
IsRedirected: 1 if initial URL same as final URL, 0 otherwise
IsPathFromSearch: 1 if only followed links after query, 0 otherwise
ClicksFromSearch: Number of hops to reach page from query
AverageDwellTime: Average time on page for this query
DwellTimeDeviation: Deviation from overall average dwell time on page
CumulativeDeviation: Deviation from average cumulative time on page
DomainDeviation: Deviation from average time on domain
ShortURLDeviation: Deviation from average time on short URL

Clickthrough features:
Position: Position of the URL in Current ranking
ClickFrequency: Number of clicks for this query, URL pair
ClickRelativeFrequency: Relative frequency of a click for this query and URL
ClickDeviation: Deviation from expected click frequency
IsNextClicked: 1 if there is a click on next position, 0 otherwise
IsPreviousClicked: 1 if there is a click on previous position, 0 otherwise
IsClickAbove: 1 if there is a click above, 0 otherwise
IsClickBelow: 1 if there is a click below, 0 otherwise

3.4 Learning a Predictive Behavior Model

Having described our features, we now turn to the actual method of mapping the features to user preferences. We attempt to learn a general implicit feedback interpretation strategy automatically instead of relying on heuristics or insights. We consider this approach preferable to heuristic strategies, because we can always mine more data instead of relying (only) on our intuition and limited laboratory evidence. Our general approach is to train a classifier to induce weights for the user behavior features, and consequently derive a predictive model of user preferences. The training is done by comparing a wide range of implicit behavior measures with explicit user judgments for a set of queries. For this, we use a large random sample of queries in the search query log of a popular web search engine, the sets of results (identified by URLs) returned for each of the queries, and any explicit relevance judgments available for each query/result pair. We can then analyze the user behavior for all the instances where these queries were submitted to the search engine.
To learn the mapping from features to relevance preferences, we use a scalable implementation of neural networks, RankNet [4], capable of learning to rank a set of given items. More specifically, for each judged query we check if a result link has been judged. If so, the label is assigned to the query/URL pair and to the corresponding feature vector for that search result. These vectors of feature values corresponding to URLs judged relevant or non-relevant by human annotators become our training set. RankNet has demonstrated excellent performance in learning to rank objects in a supervised setting, hence we use RankNet for our experiments.

4. PREDICTING USER PREFERENCES

In our experiments, we explore several models for predicting user preferences. These models range from using no implicit user feedback to using all available implicit user feedback. Ranking search results to predict user preferences is a fundamental problem in information retrieval. Most traditional IR and web search approaches use a combination of page and link features to rank search results, and a representative state-of-the-art ranking system will be used as our baseline ranker (Section 4.1). At the same time, user interactions with a search engine provide a wealth of information. A commonly considered type of interaction is user clicks on search results. Previous work [9], as described above, also examined which results were skipped (e.g., 'skip above' and 'skip next') and other related strategies to induce preference judgments from the users' skipping over results and not clicking on following results. We have also added refinements of these strategies to take into account the variability observed in realistic web scenarios. We describe these strategies in Section 4.2. As clickthroughs are just one aspect of user interaction, we extend the relevance estimation by introducing a machine learning model that incorporates clicks as well as other aspects of user behavior, such as follow-up queries and page dwell time (Section 4.3). We conclude this section by briefly describing our baseline, a state-of-the-art ranking algorithm used by an operational web search engine.

4.1 Baseline Model

A key question is whether browsing behavior can provide information absent from existing explicit judgments used to train an existing ranker. For our baseline system we use a state-of-the-art page ranking system currently used by a major web search engine. Hence, we will call this system Current for the subsequent discussion. While the specific algorithms used by the search engine are beyond the scope of this paper, the algorithm ranks results based on hundreds of features such as query to document similarity, query to anchor text similarity, and intrinsic page quality. The Current web search engine rankings provide a strong system for the comparisons and experiments of the next two sections.

4.2 Clickthrough Model

If we assume that every user click was motivated by a rational process that selected the most promising result summary, we can then interpret each click as described in Joachims et al. [10]. By studying eye tracking and comparing clicks with explicit judgments, they identified a few basic strategies. We discuss the two strategies that performed best in their experiments, Skip Above and Skip Next.

Strategy SA (Skip Above): For a set of results for a query and a clicked result at position p, all unclicked results ranked above p are predicted to be less relevant than the result at p.
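As a concrete rendering of SA, the following Python sketch (function and variable names are our own) produces the predicted preference pairs for one query.

```python
def skip_above(result_urls, clicked_positions):
    """Strategy SA: for each clicked position p, predict every unclicked
    result ranked above p to be less relevant than the result at p.
    `result_urls` lists URLs in rank order (position 1 first), and
    `clicked_positions` is the set of 1-based positions that were clicked.
    Returns (preferred_url, less_relevant_url) pairs."""
    preferences = []
    for p in sorted(clicked_positions):
        for q in range(1, p):
            if q not in clicked_positions:
                preferences.append((result_urls[p - 1], result_urls[q - 1]))
    return preferences
```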
In addition to information about results above the clicked result, we also have information about the result immediately following the clicked one. An eye-tracking study performed by Joachims et al. [10] showed that users usually consider the result immediately following the clicked result in the current ranking. Their Skip Next strategy uses this observation to predict that a result following the clicked result at p is less relevant than the clicked result, with accuracy comparable to the SA strategy above. For better coverage, we combine the SA strategy with this extension to derive the Skip Above + Skip Next strategy:

Strategy SA+N (Skip Above + Skip Next): This strategy predicts all un-clicked results immediately following a clicked result as less relevant than the clicked result, and combines these predictions with those of the SA strategy above.

We experimented with variations of these strategies, and found that SA+N outperformed both SA and the original Skip Next strategy, so we will consider the SA and SA+N strategies in the rest of the paper. These strategies are motivated and empirically tested for individual users in a laboratory setting. As we will show, these strategies do not work as well in the real web search setting due to the inherent inconsistency and noisiness of individual users' behavior.

The general approach for using our clickthrough models directly is to filter clicks to those that reflect higher-than-chance click frequency. We then use the same SA and SA+N strategies, but only for clicks that have higher-than-expected frequency according to our model. For this, we estimate the relevance component rel(q, r, f) of the observed clickthrough feature f as the deviation from the expected (background) clickthrough distribution C(f).

Strategy CD (deviation d): For a given query, compute the observed click frequency distribution o(r, p) for all results r in positions p. The click deviation for a result r in position p, dev(r, p), is computed as

dev(r, p) = o(r, p) − C(p),

where C(p) is the expected clickthrough at position p. If dev(r, p) > d, retain the click as input to the SA+N strategy above, and apply the SA+N strategy over the filtered set of click events. The choice of d selects the tradeoff between recall and precision.

While the above strategy extends SA and SA+N, it still assumes that a (filtered) clicked result is preferred over all unclicked results presented to the user above a clicked position. However, for informational queries, multiple results may be clicked, with varying frequency. Hence, it is preferable to individually compare results for a query by considering the difference between the estimated relevance components of the click distribution of the corresponding query results. We now define a generalization of the previous clickthrough interpretation strategy:

Strategy CDiff (margin m): Compute the deviation dev(ri, pi) for each result ri among r1 ... rn at its position pi. For each pair of results ri and rj, predict a preference of ri over rj iff dev(ri, pi) − dev(rj, pj) > m.

As in CD, the choice of m selects the tradeoff between recall and precision. The pairs may be preferred in the original order or in reverse of it. Given the margin, two results might be effectively indistinguishable, but only one can possibly be preferred over the other. Intuitively, CDiff generalizes the skip idea above to include cases where the user skipped (i.e., clicked less than expected) rj and preferred (i.e., clicked more than expected) ri. Furthermore, this strategy allows for differentiation within the set of clicked results, making it more appropriate for noisy user behavior.
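A sketch of CD and CDiff in the same style as the SA sketch above follows; `skip_above_next` is assumed to be an SA+N implementation analogous to `skip_above`, and all names are our own.

```python
def strategy_cd(result_urls, observed, background, d):
    """Strategy CD: keep only clicks whose deviation from the expected
    clickthrough exceeds d, then apply SA+N over the filtered clicks.
    `observed[p-1]` is o(r, p) for the result at position p, and
    `background[p-1]` is the expected clickthrough C(p)."""
    deviation = [o - c for o, c in zip(observed, background)]
    trusted_clicks = {p + 1 for p, dev in enumerate(deviation) if dev > d}
    return skip_above_next(result_urls, trusted_clicks)

def strategy_cdiff(result_urls, observed, background, m):
    """Strategy CDiff: predict r_i preferred over r_j iff
    dev(r_i, p_i) - dev(r_j, p_j) > m."""
    deviation = [o - c for o, c in zip(observed, background)]
    return [(result_urls[i], result_urls[j])
            for i in range(len(result_urls))
            for j in range(len(result_urls))
            if i != j and deviation[i] - deviation[j] > m]
```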
Furthermore, this strategy allows for differentiation within the set of clicked results, making it more appropriate to noisy user behavior. CDiff and CD are complimentary. CDiff is a generalization of the clickthrough frequency model of CD, but it ignores the positional information used in CD. Hence, combining the two strategies to improve coverage is a natural approach: Strategy CD+CDiff (deviation d, margin m): Union of CD and CDiff predictions. Other variations of the above strategies were considered, but these five methods cover the range of observed performance. 4.3 General User Behavior Model The strategies described in the previous section generate orderings based solely on observed clickthrough frequencies. As we discussed, clickthrough is just one, albeit important, aspect of user interactions with web search engine results. We now present our general strategy that relies on the automatically derived predictive user behavior models (Section 3). The UserBehavior Strategy: For a given query, each result is represented with the features in Table 3.1. Relative user preferences are then estimated using the learned user behavior model described in Section 3.4. Recall that to learn a predictive behavior model we used the features from Table 3.1 along with explicit relevance judgments as input to RankNet which learns an optimal weighting of features to predict preferences. This strategy models user interaction with the search engine, allowing it to benefit from the wisdom of crowds interacting with the results and the pages beyond. As our experiments in the subsequent sections demonstrate, modeling a richer set of user interactions beyond clickthroughs results in more accurate predictions of user preferences. 5. EXPERIMENTAL SETUP We now describe our experimental setup. We first describe the methodology used, including our evaluation metrics (Section 5.1). Then we describe the datasets (Section 5.2) and the methods we compared in this study (Section 5.3). 5.1 Evaluation Methodology and Metrics Our evaluation focuses on the pairwise agreement between preferences for results. This allows us to compare to previous work [9,10]. Furthermore, for many applications such as tuning ranking functions, pairwise preference can be used directly for training [1,4,9]. The evaluation is based on comparing preferences predicted by various models to the correct preferences derived from the explicit user relevance judgments. We discuss other applications of our models beyond web search ranking in Section 7. To create our set of test pairs we take each query and compute the cross-product between all search results, returning preferences for pairs according to the order of the associated relevance labels. To avoid ambiguity in evaluation, we discard all ties (i.e., pairs with equal label). In order to compute the accuracy of our preference predictions with respect to the correct preferences, we adapt the standard Recall and Precision measures [20]. While our task of computing pairwise agreement is different from the absolute relevance ranking task, the metrics are used in the similar way. Specifically, we report the average query recall and precision. For our task, Query Precision and Query Recall for a query q are defined as: • Query Precision: Fraction of predicted preferences for results for q that agree with preferences obtained from explicit human judgment. • Query Recall: Fraction of preferences obtained from explicit human judgment for q that were correctly predicted. 
The overall Recall and Precision are computed as the average of Query Recall and Query Precision, respectively. A drawback of this evaluation measure is that some preferences may be more valuable than others, which pairwise agreement does not capture. We discuss this issue further when we consider extensions to the current work in Section 7. 5.2 Datasets For evaluation we used 3,500 queries that were randomly sampled from query logs(for a major web search engine. For each query the top 10 returned search results were manually rated on a 6-point scale by trained judges as part of ongoing relevance improvement effort. In addition for these queries we also had user interaction data for more than 120,000 instances of these queries. The user interactions were harvested from anonymous browsing traces that immediately followed a query submitted to the web search engine. This data collection was part of voluntary opt-in feedback submitted by users from October 11 through October 31. These three weeks (21 days) of user interaction data was filtered to include only the users in the English-U.S. market. In order to better understand the effect of the amount of user interaction data available for a query on accuracy, we created subsets of our data (Q1, Q10, and Q20) that contain different amounts of interaction data: • Q1: Human-rated queries with at least 1 click on results recorded (3500 queries, 28,093 query-URL pairs) • Q10: Queries in Q1 with at least 10 clicks (1300 queries, 18,728 query-URL pairs). • Q20: Queries in Q1 with at least 20 clicks (1000 queries total, 12,922 query-URL pairs). These datasets were collected as part of normal user experience and hence have different characteristics than previously reported datasets collected in laboratory settings. Furthermore, the data size is order of magnitude larger than any study reported in the literature. 5.3 Methods Compared We considered a number of methods for comparison. We compared our UserBehavior model (Section 4.3) to previously published implicit feedback interpretation techniques and some variants of these approaches (Section 4.2), and to the current search engine ranking based on query and page features alone (Section 4.1). Specifically, we compare the following strategies: • SA: The Skip Above clickthrough strategy (Section 4.2) • SA+N: A more comprehensive extension of SA that takes better advantage of current search engine ranking. • CD: Our refinement of SA+N that takes advantage of our mixture model of clickthrough distribution to select trusted clicks for interpretation (Section 4.2). • CDiff: Our generalization of the CD strategy that explicitly uses the relevance component of clickthrough probabilities to induce preferences between search results (Section 4.2). • CD+CDiff: The strategy combining CD and CDiff as the union of predicted preferences from both (Section 4.2). • UserBehavior: We order predictions based on decreasing highest score of any page. In our preliminary experiments we observed that higher ranker scores indicate higher confidence in the predictions. This heuristic allows us to do graceful recall-precision tradeoff using the score of the highest ranked result to threshold the queries (Section 4.3) • Current: Current search engine ranking (section 4.1). Note that the Current ranker implementation was trained over a superset of the rated query/URL pairs in our datasets, but using the same truth labels as we do for our evaluation. 
Training/Test Split: The only strategy for which splitting the datasets into training and test was required was the UserBehavior method. To evaluate UserBehavior we train and validate on 75% of labeled queries, and test on the remaining 25%. The sampling was done per query (i.e., all results for a chosen query were included in the respective dataset, and there was no overlap in queries between training and test sets). It is worth noting that both the ad-hoc SA and SA+N, as well as the distribution-based strategies (CD, CDiff, and CD+CDiff), do not require a separate training and test set, since they are based on heuristics for detecting anomalous click frequencies for results. Hence, all strategies except for UserBehavior were tested on the full set of queries and associated relevance preferences, while UserBehavior was tested on a randomly chosen hold-out subset of the queries as described above. To make sure we are not favoring UserBehavior, we also tested all other strategies on the same hold-out test sets, resulting in the same accuracy results as testing over the complete datasets. 6. RESULTS We now turn to experimental evaluation of predicting relevance preference of web search results. Figure 6.1 shows the recall-precision results over the Q1 query set (Section 5.2). The results indicate that previous click interpretation strategies, SA and SA+N perform suboptimally in this setting, exhibiting precision 0.627 and 0.638 respectively. Furthermore, there is no mechanism to do recall-precision trade-off with SA and SA+N, as they do not provide prediction confidence. In contrast, our clickthrough distribution-based techniques CD and CD+CDiff exhibit somewhat higher precision than SA and SA+N (0.648 and 0.717 at Recall of 0.08, maximum achieved by SA or SA+N). SA+N SA 0.6 0.62 0.64 0.66 0.68 0.7 0.72 0.74 0.76 0.78 0.8 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 Recall Precision SA SA+N CD CDiff CD+CDiff UserBehavior Current Figure 6.1: Precision vs. Recall of SA, SA+N, CD, CDiff, CD+CDiff, UserBehavior, and Current relevance prediction methods over the Q1 dataset. Interestingly, CDiff alone exhibits precision equal to SA (0.627) at the same recall at 0.08. In contrast, by combining CD and CDiff strategies (CD+CDiff method) we achieve the best performance of all clickthrough-based strategies, exhibiting precision of above 0.66 for recall values up to 0.14, and higher at lower recall levels. Clearly, aggregating and intelligently interpreting clickthroughs, results in significant gain for realistic web search, than previously described strategies. However, even the CD+CDiff clickthrough interpretation strategy can be improved upon by automatically learning to interpret the aggregated clickthrough evidence. But first, we consider the best performing strategy, UserBehavior. Incorporating post-search navigation history in addition to clickthroughs (Browsing features) results in the highest recall and precision among all methods compared. Browse exhibits precision of above 0.7 at recall of 0.16, significantly outperforming our Baseline and clickthrough-only strategies. Furthermore, Browse is able to achieve high recall (as high as 0.43) while maintaining precision (0.67) significantly higher than the baseline ranking. To further analyze the value of different dimensions of implicit feedback modeled by the UserBehavior strategy, we consider each group of features in isolation. Figure 6.2 reports Precision vs. Recall for each feature group. 
Interestingly, Query-text alone has low accuracy (only marginally better than Random). Furthermore, Browsing features alone have higher precision (with lower maximum recall achieved) than considering all of the features in our UserBehavior model. Applying different machine learning methods for combining classifier predictions may increase performance of using all features for all recall values. 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.01 0.05 0.09 0.13 0.17 0.21 0.25 0.29 0.33 0.37 0.41 0.45 Recall Precision All Features Clickthrough Query-text Browsing Figure 6.2: Precision vs. recall for predicting relevance with each group of features individually. 0.65 0.67 0.69 0.71 0.73 0.75 0.77 0.79 0.81 0.83 0.85 0.01 0.05 0.09 0.13 0.17 0.21 0.25 0.29 0.33 0.37 0.41 0.45 0.49 Recall Precision CD+CDiff:Q1 UserBehavior:Q1 CD+CDiff:Q10 UserBehavior:Q10 CD+CDiff:Q20 UserBehavior:Q20 Figure 6.3: Recall vs. Precision of CD+CDiff and UserBehavior for query sets Q1, Q10, and Q20 (queries with at least 1, at least 10, and at least 20 clicks respectively). Interestingly, the ranker trained over Clickthrough-only features achieves substantially higher recall and precision than human-designed clickthrough-interpretation strategies described earlier. For example, the clickthrough-trained classifier achieves 0.67 precision at 0.42 Recall vs. the maximum recall of 0.14 achieved by the CD+CDiff strategy. Our clickthrough and user behavior interpretation strategies rely on extensive user interaction data. We consider the effects of having sufficient interaction data available for a query before proposing a re-ranking of results for that query. Figure 6.3 reports recall-precision curves for the CD+CDiff and UserBehavior methods for different test query sets with at least 1 click (Q1), 10 clicks (Q10) and 20 clicks (Q20) available per query. Not surprisingly, CD+CDiff improves with more clicks. This indicates that accuracy will improve as more user interaction histories become available, and more queries from the Q1 set will have comprehensive interaction histories. Similarly, the UserBehavior strategy performs better for queries with 10 and 20 clicks, although the improvement is less dramatic than for CD+CDiff. For queries with sufficient clicks, CD+CDiff exhibits precision comparable with Browse at lower recall. 0 0.05 0.1 0.15 0.2 7 12 17 21 Days of user interaction data harvested Recall CD+CDiff UserBehavior Figure 6.4: Recall of CD+CDiff and UserBehavior strategies at fixed minimum precision 0.7 for varying amounts of user activity data (7, 12, 17, 21 days). Our techniques often do not make relevance predictions for search results (i.e., if no interaction data is available for the lower-ranked results), consequently maintaining higher precision at the expense of recall. In contrast, the current search engine always makes a prediction for every result for a given query. As a consequence, the recall of Current is high (0.627) at the expense of lower precision As another dimension of acquiring training data we consider the learning curve with respect to amount (days) of training data available. Figure 6.4 reports the Recall of CD+CDiff and UserBehavior strategies for varying amounts of training data collected over time. We fixed minimum precision for both strategies at 0.7 as a point substantially higher than the baseline (0.625). As expected, Recall of both strategies improves quickly with more days of interaction data examined. We now briefly summarize our experimental results. 
We showed that by intelligently aggregating user clickthroughs across queries and users, we can achieve higher accuracy on predicting user preferences. Because of the skewed distribution of user clicks our clickthrough-only strategies have high precision, but low recall (i.e., do not attempt to predict relevance of many search results). Nevertheless, our CD+CDiff clickthrough strategy outperforms most recent state-of-the-art results by a large margin (0.72 precision for CD+CDiff vs. 0.64 for SA+N) at the highest recall level of SA+N. Furthermore, by considering the comprehensive UserBehavior features that model user interactions after the search and beyond the initial click, we can achieve substantially higher precision and recall than considering clickthrough alone. Our UserBehavior strategy achieves recall of over 0.43 with precision of over 0.67 (with much higher precision at lower recall levels), substantially outperforms the current search engine preference ranking and all other implicit feedback interpretation methods. 7. CONCLUSIONS AND FUTURE WORK Our paper is the first, to our knowledge, to interpret postsearch user behavior to estimate user preferences in a real web search setting. We showed that our robust models result in higher prediction accuracy than previously published techniques. We introduced new, robust, probabilistic techniques for interpreting clickthrough evidence by aggregating across users and queries. Our methods result in clickthrough interpretation substantially more accurate than previously published results not specifically designed for web search scenarios. Our methods'' predictions of relevance preferences are substantially more accurate than the current state-of-the-art search result ranking that does not consider user interactions. We also presented a general model for interpreting post-search user behavior that incorporates clickthrough, browsing, and query features. By considering the complete search experience after the initial query and click, we demonstrated prediction accuracy far exceeding that of interpreting only the limited clickthrough information. Furthermore, we showed that automatically learning to interpret user behavior results in substantially better performance than the human-designed ad-hoc clickthrough interpretation strategies. Another benefit of automatically learning to interpret user behavior is that such methods can adapt to changing conditions and changing user profiles. For example, the user behavior model on intranet search may be different from the web search behavior. Our general UserBehavior method would be able to adapt to these changes by automatically learning to map new behavior patterns to explicit relevance ratings. A natural application of our preference prediction models is to improve web search ranking [1]. In addition, our work has many potential applications including click spam detection, search abuse detection, personalization, and domain-specific ranking. For example, our automatically derived behavior models could be trained on examples of search abuse or click spam behavior instead of relevance labels. Alternatively, our models could be used directly to detect anomalies in user behavior - either due to abuse or to operational problems with the search engine. While our techniques perform well on average, our assumptions about clickthrough distributions (and learning the user behavior models) may not hold equally well for all queries. 
Learning User Interaction Models for Predicting Web Search Result Preferences

ABSTRACT
Evaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance. We present a real-world study of modeling the behavior of web search users to predict web search result preferences. Accurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks. Our key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected "noisy" user behavior. We show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods. We generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone. We report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods.

1. INTRODUCTION
Relevance measurement is crucial to web search and to information retrieval in general. Traditionally, search relevance is measured by using human assessors to judge the relevance of query-document pairs. However, explicit human ratings are expensive and difficult to obtain. At the same time, millions of people interact daily with web search engines, providing valuable implicit feedback through their interactions with the search results. If we could turn these interactions into relevance judgments, we could obtain large amounts of data for evaluating, maintaining, and improving information retrieval systems. Recently, automatic or implicit relevance feedback has developed into an active area of research in the information retrieval community, at least in part due to an increase in available resources and to the rising popularity of web search. However, most traditional IR work was performed over controlled test collections and carefully selected query sets and tasks. Therefore, it is not clear whether these techniques will work for general real-world web search. A significant distinction is that web search is not controlled. Individual users may behave irrationally or maliciously, or may not even be real users; all of this affects the data that can be gathered. But the amount of user interaction data is orders of magnitude larger than anything available in a non-web-search setting. By using the aggregated behavior of large numbers of users (and not treating each user as an individual "expert"), we can correct for the noise inherent in individual interactions and generate relevance judgments that are more accurate than techniques not specifically designed for the web search setting. Furthermore, observations and insights obtained in laboratory settings do not necessarily translate to real-world usage. Hence, it is preferable to automatically induce feedback interpretation strategies from large amounts of user interactions. Automatically learning to interpret user behavior would allow systems to adapt to changing conditions, changing user behavior patterns, and different search settings. We present techniques to automatically interpret the collective behavior of users interacting with a web search engine to predict user preferences for search results.
Our contributions include:
• A distributional model of user behavior, robust to noise within individual user sessions, that can recover relevance preferences from user interactions (Section 3).
• Extensions of existing clickthrough strategies to include richer browsing and interaction features (Section 4).
• A thorough evaluation of our user behavior models, as well as of previously published state-of-the-art techniques, over a large set of web search sessions (Sections 5 and 6).
We discuss our results and outline future directions and various applications of this work in Section 7, which concludes the paper.

2. BACKGROUND AND RELATED WORK
Ranking search results is a fundamental problem in information retrieval. The most common approaches in the context of the web use both the similarity of the query to the page content, and the overall quality of a page [3, 20]. A state-of-the-art search engine may use hundreds of features to describe a candidate page, employing sophisticated algorithms to rank pages based on these features. Current search engines are commonly tuned on human relevance judgments. Human annotators rate a set of pages for a query according to perceived relevance, creating the "gold standard" against which different ranking algorithms can be evaluated. Reducing the dependence on explicit human judgments by using implicit relevance feedback has been an active topic of research. Several research groups have evaluated the relationship between implicit measures and user interest. In these studies, both reading time and explicit ratings of interest are collected. Morita and Shinoda [14] studied the amount of time that users spent reading Usenet news articles and found that reading time could predict a user's interest levels. Konstan et al. [13] showed that reading time was a strong predictor of user interest in their GroupLens system. Oard and Kim [15] studied whether implicit feedback could substitute for explicit ratings in recommender systems. More recently, Oard and Kim [16] presented a framework for characterizing observable user behaviors using two dimensions: the underlying purpose of the observed behavior and the scope of the item being acted upon. Goecks and Shavlik [8] approximated human labels by collecting a set of page activity measures while users browsed the World Wide Web. The authors hypothesized correlations between a high degree of page activity and a user's interest. While the results were promising, the sample size was small and the implicit measures were not tested against explicit judgments of user interest. Claypool et al. [6] studied how several implicit measures related to the interests of the user. They developed a custom browser called the Curious Browser to gather data, in a computer lab, about implicit interest indicators and to probe for explicit judgments of Web pages visited. Claypool et al. found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong positive relationship with explicit interest, while individual scrolling methods and mouse clicks were not correlated with explicit interest. Fox et al. [7] explored the relationship between implicit and explicit measures in Web search. They built an instrumented browser to collect data and then developed Bayesian models to relate implicit measures and explicit relevance judgments for both individual queries and search sessions.
They found that clickthrough was the most important individual variable but that predictive accuracy could be improved by using additional variables, notably dwell time on a page. Joachims [9] developed valuable insights into the collection of implicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. More recently, Joachims et al. [10] presented an empirical evaluation of interpreting clickthrough evidence. By performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthrough events in a controlled, laboratory setting. A more comprehensive overview of studies of implicit measures is given by Kelly and Teevan [12]. Unfortunately, the extent to which existing research applies to real-world web search is unclear. In this paper, we build on previous research to develop robust user behavior interpretation models for the real web search setting.

3. LEARNING USER BEHAVIOR MODELS
As we noted earlier, real web search user behavior can be "noisy" in the sense that user behaviors are only probabilistically related to explicit relevance judgments and preferences. Hence, instead of treating each user as a reliable "expert", we aggregate information from many unreliable user search session traces. Our main approach is to model user web search behavior as if it were generated by two components: a "relevance" component, query-specific behavior influenced by the apparent result relevance, and a "background" component, users clicking indiscriminately. Our general idea is to model the deviations from the expected user behavior. Hence, in addition to basic features, which we will describe in detail in Section 3.2, we compute derived features that measure the deviation of the observed feature value for a given search result from the expected value for a result, with no query-dependent information. We motivate our intuitions with a particularly important behavior feature, result clickthrough, analyzed next, and then introduce our general model of user behavior that incorporates other user actions (Section 3.2).

3.1 A Case Study in Click Distributions
As we discussed, we aggregate statistics across many user sessions. A click on a result may mean that some user found the result summary promising; it could also be caused by people clicking indiscriminately. In general, individual user behavior, clickthrough and otherwise, is noisy and cannot be relied upon for accurate relevance judgments. The data set is described in more detail in Section 5.2. For the present it suffices to note that we focus on 3,500 queries randomly sampled from the query logs of a major web search engine. For these queries we aggregate click data over more than 120,000 searches performed over a three-week period. We also have explicit relevance judgments for the top 10 results for each query. Figure 3.1 shows the relative clickthrough frequency as a function of result position. The aggregated click frequency at result position p is calculated by first computing the frequency of a click at p for each query (i.e., approximating the probability that a randomly chosen click for that query would land on position p). These frequencies are then averaged across queries and normalized so that the relative frequency of a click at the top position is 1; a minimal sketch of this aggregation is shown below. The resulting distribution agrees with previous observations that users click more often on top-ranked results.
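To make the aggregation step concrete, the following is a minimal sketch of computing the background click distribution from raw click logs. The data layout (a list of (query, position) click events) and the function name are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter, defaultdict

def background_click_distribution(click_events, num_positions=10):
    """Estimate the relative click frequency C(p) for each result position.

    click_events: iterable of (query, position) pairs, one per observed
    click; positions are 1-based ranks on the result page.
    """
    per_query = defaultdict(Counter)
    for query, position in click_events:
        per_query[query][position] += 1

    # Per-query click probability at each position, averaged across
    # queries so that frequent queries do not dominate the estimate.
    averaged = [0.0] * (num_positions + 1)
    for counts in per_query.values():
        total = sum(counts.values())
        for p in range(1, num_positions + 1):
            averaged[p] += counts[p] / total
    n_queries = len(per_query)
    averaged = [f / n_queries for f in averaged]

    # Normalize so that the top position has relative frequency 1.
    top = averaged[1] or 1.0
    return {p: averaged[p] / top for p in range(1, num_positions + 1)}
```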
This distribution reflects the fact that search engines do a reasonable job of ranking results, as well as biases to click top results, and noise; we attempt to separate these components in the analysis that follows.

Figure 3.1: Relative click frequency for top 30 result positions over 3,500 queries and 120,000 searches.

First we consider the distribution of clicks for the relevant documents for these queries. Figure 3.2 reports the aggregated click distribution for queries with varying Position of Top Relevant document (PTR). While there are many clicks above the first relevant document for each distribution, there are clear "peaks" in click frequency at the first relevant result. For example, for queries with the top relevant result in position 2, the relative click frequency at that position (second bar) is higher than the click frequency at other positions for these queries. Nevertheless, many users still click on the non-relevant results in position 1 for such queries. This shows a stronger property of the bias in the click distribution towards top results: users click more often on results that are ranked higher, even when they are not relevant.

Figure 3.2: Relative click frequency for queries with varying PTR (Position of Top Relevant document).
Figure 3.3: Relative corrected click frequency for relevant documents with varying PTR (Position of Top Relevant).

If we subtract the background distribution of Figure 3.1 from the "mixed" distribution of Figure 3.2, we obtain the distribution in Figure 3.3, where the remaining click frequency distribution can be interpreted as the relevance component of the results. Note that the corrected click distribution correlates closely with actual result relevance as explicitly rated by human judges.

3.2 Robust User Behavior Model
Clicks on search results comprise only a small fraction of the post-search activities typically performed by users. We now introduce our techniques for going beyond the clickthrough statistics and explicitly modeling post-search user behavior. Although clickthrough distributions are heavily biased towards top results, we have just shown how the "relevance-driven" click distribution can be recovered by correcting for the prior, background distribution. We conjecture that other aspects of user behavior (e.g., page dwell time) are similarly distorted. Our general model includes two feature types for describing user behavior: direct and deviational, where the former is the directly measured values, and the latter is the deviation from the expected values estimated from the overall (query-independent) distributions for the corresponding directly observed features. More formally, we postulate that the observed value o of a feature f for a query q and result r can be expressed as a mixture of two components:

o(q, r, f) = C(f) + rel(q, r, f)    (1)

where C(f) is the prior "background" distribution for values of f aggregated across all queries, and rel(q, r, f) is the component of the behavior influenced by the relevance of the result r. As illustrated above with the clickthrough feature, if we subtract the background distribution (i.e., the expected clickthrough for a result at a given position) from the observed clickthrough frequency at a given position, we can approximate the relevance component of the clickthrough value.
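As a concrete illustration of Equation 1, the sketch below derives the deviational value of a feature by subtracting the query-independent expectation from the observed value. The dictionary-based interfaces and the "Deviation" suffix (echoing the ClickDeviation feature named later) are assumptions for illustration.

```python
def relevance_component(observed_value, expected_value):
    """rel(q, r, f) = o(q, r, f) - C(f): the deviation of an observed
    feature value from its query-independent expectation (Equation 1)."""
    return observed_value - expected_value

def deviation_features(direct_features, background):
    """Augment each directly observed feature with its deviational
    counterpart. direct_features maps feature name -> observed value
    (averaged across sessions for one query-URL pair); background maps
    feature name -> expected value for a result at this rank."""
    derived = {}
    for name, value in direct_features.items():
        derived[name] = value
        derived[name + "Deviation"] = relevance_component(
            value, background[name])
    return derived
```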
In order to reduce the effect of individual user variations in behavior, we average observed feature values across all users and search sessions for each query-URL pair. This aggregation gives additional robustness by not relying on individual "noisy" user interactions. In summary, the user behavior for a query-URL pair is represented by a feature vector that includes both the directly observed features and the derived, "corrected" feature values. We now describe the actual features we use to represent user behavior.

3.3 Features for Representing User Behavior
Our goal is to devise a sufficiently rich set of features that allow us to characterize when a user will be satisfied with a web search result. Once the user has submitted a query, they perform many different actions (reading snippets, clicking results, navigating, refining their query) which we capture and summarize. This information was obtained via opt-in client-side instrumentation from users of a major web search engine. This rich representation of user behavior is similar in many respects to the recent work by Fox et al. [7]. An important difference is that many of our features are (by design) query-specific, whereas theirs was (by design) a general, query-independent model of user behavior. Furthermore, we include derived, distributional features computed as described above. The features we use to represent user search interactions are summarized in Table 3.1. For clarity, we organize the features into the groups Query-text, Clickthrough, and Browsing.
Query-text features: Users decide which results to examine in more detail by looking at the result title, URL, and summary; in some cases, looking at the original document is not even necessary. To model this aspect of user experience we defined features to characterize the nature of the query and its relation to the snippet text. These include features such as the overlap between the words in the title and in the query (TitleOverlap), the fraction of words shared by the query and the result summary (SummaryOverlap), etc.
Browsing features: Simple aspects of the user's web page interactions can be captured and quantified. These features are used to characterize interactions with pages beyond the results page. For example, we compute how long users dwell on a page (TimeOnPage) or domain (TimeOnDomain), and the deviation of dwell time from the expected page dwell time for a query. These features allow us to model intra-query diversity of page browsing behavior (e.g., navigational queries, on average, are likely to have shorter page dwell time than transactional or informational queries). We include both the direct features and the derived features described above.
Clickthrough features: Clicks are a special case of user interaction with the search engine. We include all the features necessary to "learn" the clickthrough-based strategies described in Sections 4.2 and 4.3. For example, for a query-URL pair we provide the number of clicks for the result (ClickFrequency), as well as whether there was a click on a result below or above the current URL (IsClickBelow, IsClickAbove). The derived feature values such as ClickRelativeFrequency and ClickDeviation are computed as described in Equation 1.

Table 3.1: Features used to represent post-search interactions for a given query and search result URL.
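Since the body of Table 3.1 did not survive extraction, the sketch below illustrates one plausible shape of the per-query-URL feature vector using only the feature names mentioned in the text; the grouping, field types, and the class name are assumptions, and the actual table contains more features.

```python
from dataclasses import dataclass

@dataclass
class QueryUrlFeatures:
    """Post-search interaction features for one query-URL pair.

    Field names follow the paper's capitalization; only features named
    in the text are listed here."""
    # Query-text features
    TitleOverlap: float         # query/title word overlap
    SummaryOverlap: float       # fraction of query words in the summary
    # Clickthrough features (direct and derived, per Equation 1)
    ClickFrequency: float
    ClickRelativeFrequency: float
    ClickDeviation: float
    IsClickBelow: bool
    IsClickAbove: bool
    # Browsing features
    TimeOnPage: float           # dwell time on the landing page
    TimeOnDomain: float         # cumulative dwell time on the domain
    TimeOnPageDeviation: float  # deviation from expected dwell time
```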
3.4 Learning a Predictive Behavior Model
Having described our features, we now turn to the actual method of mapping the features to user preferences. We attempt to learn a general implicit feedback interpretation strategy automatically instead of relying on heuristics or insights. We consider this approach to be preferable to heuristic strategies, because we can always mine more data instead of relying (only) on our intuition and limited laboratory evidence. Our general approach is to train a classifier to induce weights for the user behavior features, and consequently derive a predictive model of user preferences. The training is done by comparing a wide range of implicit behavior measures with explicit user judgments for a set of queries. For this, we use a large random sample of queries in the search query log of a popular web search engine, the sets of results (identified by URLs) returned for each of the queries, and any explicit relevance judgments available for each query/result pair. We can then analyze the user behavior for all the instances where these queries were submitted to the search engine. To learn the mapping from features to relevance preferences, we use a scalable implementation of neural networks, RankNet [4], capable of learning to rank a set of given items. More specifically, for each judged query we check if a result link has been judged. If so, the label is assigned to the query/URL pair and to the corresponding feature vector for that search result. These vectors of feature values corresponding to URLs judged relevant or non-relevant by human annotators become our training set. RankNet has demonstrated excellent performance in learning to rank objects in a supervised setting, hence we use RankNet for our experiments.
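RankNet itself is described in [4]; as a hedged stand-in, the sketch below shows the general shape of pairwise preference learning over the feature vectors, using a simple logistic pairwise loss rather than the actual RankNet implementation. All names and the training loop are assumptions for illustration.

```python
import numpy as np

def pairwise_logistic_update(w, x_preferred, x_other, lr=0.01):
    """One gradient-descent step on a RankNet-style pairwise objective:
    P(preferred > other) = sigmoid(w . (x_preferred - x_other)).

    w: weight vector over user behavior features; x_*: feature vectors."""
    diff = x_preferred - x_other
    p = 1.0 / (1.0 + np.exp(-np.dot(w, diff)))
    # Gradient step on -log(p) with respect to w.
    return w + lr * (1.0 - p) * diff

def train(pairs, dim, epochs=10):
    """pairs: list of (features_of_more_relevant, features_of_less_relevant)
    tuples built from query/URL pairs with explicit relevance labels."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x_pref, x_other in pairs:
            w = pairwise_logistic_update(w, x_pref, x_other)
    return w
```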
4. PREDICTING USER PREFERENCES
In our experiments, we explore several models for predicting user preferences. These models range from using no implicit user feedback to using all available implicit user feedback. Ranking search results to predict user preferences is a fundamental problem in information retrieval. Most traditional IR and web search approaches use a combination of page and link features to rank search results, and a representative state-of-the-art ranking system will be used as our baseline ranker (Section 4.1). At the same time, user interactions with a search engine provide a wealth of information. A commonly considered type of interaction is user clicks on search results. Previous work [9], as described above, also examined which results were skipped (e.g., "skip above" and "skip next") and other related strategies to induce preference judgments from the users' skipping over results and not clicking on following results. We have also added refinements of these strategies to take into account the variability observed in realistic web scenarios. We describe these strategies in Section 4.2. As clickthroughs are just one aspect of user interaction, we extend the relevance estimation by introducing a machine learning model that incorporates clicks as well as other aspects of user behavior, such as follow-up queries and page dwell time (Section 4.3). We conclude this section by briefly describing our "baseline", a state-of-the-art ranking algorithm used by an operational web search engine.

4.1 Baseline Model
A key question is whether browsing behavior can provide information absent from existing explicit judgments used to train an existing ranker. For our baseline system we use a state-of-the-art page ranking system currently used by a major web search engine. Hence, we will call this system Current for the subsequent discussion. While the specific algorithms used by the search engine are beyond the scope of this paper, the algorithm ranks results based on hundreds of features such as query to document similarity, query to anchor text similarity, and intrinsic page quality. The Current web search engine rankings provide a strong system for comparison in the experiments of the next two sections.

4.2 Clickthrough Model
If we assume that every user click was motivated by a rational process that selected the most promising result summary, we can then interpret each click as described in Joachims et al. [10]. By studying eye tracking and comparing clicks with explicit judgments, they identified a few basic strategies. We discuss the two strategies that performed best in their experiments, Skip Above and Skip Next.
Strategy SA (Skip Above): For a set of results for a query and a clicked result at position p, all unclicked results ranked above p are predicted to be less relevant than the result at p.
In addition to information about results above the clicked result, we also have information about the result immediately following the clicked one. The eye tracking study performed by Joachims et al. [10] showed that users usually consider the result immediately following the clicked result in the current ranking. Their Skip Next strategy uses this observation to predict that a result following the clicked result at p is less relevant than the clicked result, with accuracy comparable to the SA strategy above. For better coverage, we combine the SA strategy with this extension to derive the Skip Above + Skip Next strategy:
Strategy SA+N (Skip Above + Skip Next): This strategy predicts all unclicked results immediately following a clicked result as less relevant than the clicked result, and combines these predictions with those of the SA strategy above.
We experimented with variations of these strategies, and found that SA+N outperformed both SA and the original Skip Next strategy, so we will consider the SA and SA+N strategies in the rest of the paper. These strategies were motivated and empirically tested for individual users in a laboratory setting. As we will show, these strategies do not work as well in a real web search setting due to the inherent inconsistency and noisiness of individual users' behavior. The general approach for using our clickthrough models directly is to filter clicks to those that reflect higher-than-chance click frequency. We then use the same SA and SA+N strategies, but only for clicks that have higher-than-expected frequency according to our model. For this, we estimate the relevance component rel(q, r, f) of the observed clickthrough feature f as the deviation from the expected (background) clickthrough distribution C(f).
Strategy CD (deviation d): For a given query, compute the observed click frequency distribution o(r, p) for all results r in positions p. The click deviation for a result r in position p, dev(r, p), is computed as dev(r, p) = o(r, p) - C(p), where C(p) is the expected clickthrough at position p. If dev(r, p) > d, retain the click as input to the SA+N strategy above, and apply the SA+N strategy over the filtered set of click events. The choice of d selects the tradeoff between recall and precision; a sketch of this filtering appears below.
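The following is a minimal sketch of the SA+N strategy with CD-style deviation filtering, under the assumption that per-query clicks arrive as a mapping from position to observed click frequency (compatible with the background distribution computed earlier); the data layout and names are illustrative, and "unclicked" is simplified to mean a position with no recorded clicks.

```python
def cd_filtered_preferences(ranked_results, clicks, background, d):
    """CD strategy: keep clicks whose frequency exceeds the background
    by more than d, then run SA+N over the filtered clicks.

    ranked_results: results in ranked order (position 1 first).
    clicks: dict mapping position -> observed click frequency o(r, p)
    for this query. background: dict C(p) indexed by position.
    Returns (less_relevant, more_relevant) preference pairs."""
    trusted = [p for p, freq in clicks.items() if freq - background[p] > d]
    prefs = []
    for p in trusted:
        clicked = ranked_results[p - 1]
        # Skip Above: unclicked results ranked above p are predicted
        # less relevant than the clicked result at p.
        for q in range(1, p):
            if q not in clicks:
                prefs.append((ranked_results[q - 1], clicked))
        # Skip Next: the unclicked result immediately below p is also
        # predicted less relevant than the clicked result.
        if p < len(ranked_results) and (p + 1) not in clicks:
            prefs.append((ranked_results[p], clicked))
    return prefs
```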
While the above strategy extends SA and SA+N, it still assumes that a (filtered) clicked result is preferred over all unclicked results presented to the user above the clicked position. However, for informational queries, multiple results may be clicked, with varying frequency. Hence, it is preferable to individually compare results for a query by considering the difference between the estimated relevance components of the click distribution of the corresponding query results. We now define a generalization of the previous clickthrough interpretation strategy:
Strategy CDiff (margin m): Compute the deviation dev(r, p) for each result r1...rn in position p. For each pair of results ri and rj, predict a preference of ri over rj iff dev(ri, pi) - dev(rj, pj) > m.
As in CD, the choice of m selects the tradeoff between recall and precision. The pairs may be preferred in the original order or in reverse of it. Given the margin, two results might be effectively indistinguishable, but only one can possibly be preferred over the other. Intuitively, CDiff generalizes the skip idea above to include cases where the user "skipped" (i.e., clicked less than expected) on uj and "preferred" (i.e., clicked more than expected) on ui. Furthermore, this strategy allows for differentiation within the set of clicked results, making it more appropriate to noisy user behavior. CDiff and CD are complementary: CDiff is a generalization of the clickthrough frequency model of CD, but it ignores the positional information used in CD. Hence, combining the two strategies to improve coverage is a natural approach:
Strategy CD+CDiff (deviation d, margin m): Union of CD and CDiff predictions.
Other variations of the above strategies were considered, but these five methods cover the range of observed performance; a sketch of CDiff and the combined strategy follows below.
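Continuing the sketch above (and reusing cd_filtered_preferences from it), CDiff compares deviation scores pairwise and CD+CDiff takes the union of the two prediction sets; results are assumed to be hashable (e.g., URL strings), and the data layout remains an illustrative assumption.

```python
def cdiff_preferences(ranked_results, clicks, background, m):
    """CDiff: prefer r_i over r_j when dev(r_i, p_i) - dev(r_j, p_j) > m."""
    dev = {p: clicks.get(p, 0.0) - background[p]
           for p in range(1, len(ranked_results) + 1)}
    prefs = []
    for pi in dev:
        for pj in dev:
            if pi != pj and dev[pi] - dev[pj] > m:
                # Preference pair: (less relevant, more relevant).
                prefs.append((ranked_results[pj - 1],
                              ranked_results[pi - 1]))
    return prefs

def cd_plus_cdiff(ranked_results, clicks, background, d, m):
    """CD+CDiff: union of CD and CDiff predictions, deduplicated."""
    cd = cd_filtered_preferences(ranked_results, clicks, background, d)
    cdiff = cdiff_preferences(ranked_results, clicks, background, m)
    return list(set(cd) | set(cdiff))
```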
4.3 General User Behavior Model
The strategies described in the previous section generate orderings based solely on observed clickthrough frequencies. As we discussed, clickthrough is just one, albeit important, aspect of user interactions with web search engine results. We now present our general strategy that relies on the automatically derived predictive user behavior models (Section 3).
The UserBehavior Strategy: For a given query, each result is represented with the features in Table 3.1. Relative user preferences are then estimated using the learned user behavior model described in Section 3.4. Recall that to learn a predictive behavior model we used the features from Table 3.1 along with explicit relevance judgments as input to RankNet, which learns an optimal weighting of features to predict preferences.
This strategy models user interaction with the search engine, allowing it to benefit from the wisdom of crowds interacting with the results and the pages beyond. As our experiments in the subsequent sections demonstrate, modeling a richer set of user interactions beyond clickthroughs results in more accurate predictions of user preferences.

5. EXPERIMENTAL SETUP
We now describe our experimental setup. We first describe the methodology used, including our evaluation metrics (Section 5.1). Then we describe the datasets (Section 5.2) and the methods we compared in this study (Section 5.3).

5.1 Evaluation Methodology and Metrics
Our evaluation focuses on the pairwise agreement between preferences for results. This allows us to compare to previous work [9, 10]. Furthermore, for many applications such as tuning ranking functions, pairwise preferences can be used directly for training [1, 4, 9]. The evaluation is based on comparing preferences predicted by various models to the "correct" preferences derived from the explicit user relevance judgments. We discuss other applications of our models beyond web search ranking in Section 7. To create our set of "test" pairs we take each query and compute the cross-product between all search results, returning preferences for pairs according to the order of the associated relevance labels. To avoid ambiguity in evaluation, we discard all ties (i.e., pairs with equal labels). In order to compute the accuracy of our preference predictions with respect to the correct preferences, we adapt the standard Recall and Precision measures [20]. While our task of computing pairwise agreement is different from the absolute relevance ranking task, the metrics are used in a similar way. Specifically, we report the average query recall and precision. For our task, Query Precision and Query Recall for a query q are defined as:
• Query Precision: the fraction of predicted preferences for results for q that agree with preferences obtained from explicit human judgments.
• Query Recall: the fraction of preferences obtained from explicit human judgments for q that were correctly predicted.
The overall Recall and Precision are computed as the averages of Query Recall and Query Precision, respectively; a sketch of this computation follows below. A drawback of this evaluation measure is that some preferences may be more valuable than others, which pairwise agreement does not capture. We discuss this issue further when we consider extensions to the current work in Section 7.
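For concreteness, a minimal sketch of the pairwise Query Precision / Query Recall computation is given below; preference pairs are assumed to be (less relevant, more relevant) tuples as in the earlier sketches.

```python
def query_precision_recall(predicted, truth):
    """Pairwise agreement for one query.

    predicted, truth: sets of (less_relevant, more_relevant) pairs; the
    truth pairs come from explicit human judgments with ties discarded."""
    agree = len(predicted & truth)
    precision = agree / len(predicted) if predicted else 0.0
    recall = agree / len(truth) if truth else 0.0
    return precision, recall

def average_precision_recall(per_query):
    """Overall Precision/Recall: averages of the per-query values.

    per_query: list of (predicted_pairs, truth_pairs), one per query."""
    scores = [query_precision_recall(p, t) for p, t in per_query]
    n = len(scores)
    return (sum(p for p, _ in scores) / n,
            sum(r for _, r in scores) / n)
```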
5.2 Datasets
For evaluation we used 3,500 queries that were randomly sampled from the query logs of a major web search engine. For each query, the top 10 returned search results were manually rated on a 6-point scale by trained judges as part of an ongoing relevance improvement effort. In addition, for these queries we also had user interaction data for more than 120,000 instances of these queries. The user interactions were harvested from anonymous browsing traces that immediately followed a query submitted to the web search engine. This data collection was part of voluntary opt-in feedback submitted by users from October 11 through October 31. These three weeks (21 days) of user interaction data were filtered to include only users in the English-U.S. market. In order to better understand the effect of the amount of user interaction data available for a query on accuracy, we created subsets of our data (Q1, Q10, and Q20) that contain different amounts of interaction data:
• Q1: Human-rated queries with at least 1 click on results recorded (3,500 queries, 28,093 query-URL pairs).
• Q10: Queries in Q1 with at least 10 clicks (1,300 queries, 18,728 query-URL pairs).
• Q20: Queries in Q1 with at least 20 clicks (1,000 queries, 12,922 query-URL pairs).
These datasets were collected as part of normal user experience and hence have different characteristics than previously reported datasets collected in laboratory settings. Furthermore, the data size is an order of magnitude larger than that of any study reported in the literature.

5.3 Methods Compared
We considered a number of methods for comparison. We compared our UserBehavior model (Section 4.3) to previously published implicit feedback interpretation techniques and some variants of these approaches (Section 4.2), and to the current search engine ranking based on query and page features alone (Section 4.1). Specifically, we compare the following strategies:
• SA: The Skip Above clickthrough strategy (Section 4.2).
• SA+N: A more comprehensive extension of SA that takes better advantage of the current search engine ranking (Section 4.2).
• CD: Our refinement of SA+N that takes advantage of our mixture model of the clickthrough distribution to select "trusted" clicks for interpretation (Section 4.2).
• CDiff: Our generalization of the CD strategy that explicitly uses the relevance component of clickthrough probabilities to induce preferences between search results (Section 4.2).
• CD+CDiff: The strategy combining CD and CDiff as the union of predicted preferences from both (Section 4.2).
• UserBehavior: We order predictions based on the decreasing highest score of any page. In our preliminary experiments we observed that higher ranker scores indicate higher "confidence" in the predictions. This heuristic allows us to do a graceful recall-precision tradeoff using the score of the highest-ranked result to threshold the queries (Section 4.3).
• Current: The current search engine ranking (Section 4.1). Note that the Current ranker implementation was trained over a superset of the rated query/URL pairs in our datasets, but using the same "truth" labels as we do for our evaluation.
Training/Test Split: The only strategy for which splitting the datasets into training and test sets was required was the UserBehavior method. To evaluate UserBehavior we train and validate on 75% of the labeled queries, and test on the remaining 25%. The sampling was done per query (i.e., all results for a chosen query were included in the respective dataset, and there was no overlap in queries between training and test sets). It is worth noting that both the ad-hoc SA and SA+N, as well as the distribution-based strategies (CD, CDiff, and CD+CDiff), do not require separate training and test sets, since they are based on heuristics for detecting "anomalous" click frequencies for results. Hence, all strategies except for UserBehavior were tested on the full set of queries and associated relevance preferences, while UserBehavior was tested on a randomly chosen hold-out subset of the queries as described above. To make sure we are not favoring UserBehavior, we also tested all other strategies on the same hold-out test sets, resulting in the same accuracy results as testing over the complete datasets.

6. RESULTS
We now turn to the experimental evaluation of predicting relevance preferences of web search results. Figure 6.1 shows the recall-precision results over the Q1 query set (Section 5.2). The results indicate that the previous click interpretation strategies, SA and SA+N, perform suboptimally in this setting, exhibiting precision of 0.627 and 0.638 respectively. Furthermore, there is no mechanism to do a recall-precision trade-off with SA and SA+N, as they do not provide prediction confidence. In contrast, our clickthrough distribution-based techniques CD and CD+CDiff exhibit somewhat higher precision than SA and SA+N (0.648 and 0.717 at Recall of 0.08, the maximum achieved by SA or SA+N).

Figure 6.1: Precision vs. Recall of SA, SA+N, CD, CDiff, CD+CDiff, UserBehavior, and Current relevance prediction methods over the Q1 dataset.

Interestingly, CDiff alone exhibits precision equal to SA (0.627) at the same recall of 0.08. In contrast, by combining the CD and CDiff strategies (the CD+CDiff method) we achieve the best performance of all clickthrough-based strategies, exhibiting precision above 0.66 for recall values up to 0.14, and higher at lower recall levels. Clearly, aggregating and intelligently interpreting clickthroughs yields significant gains for realistic web search over previously described strategies.
However, even the CD+CDiff clickthrough interpretation strategy can be improved upon by automatically learning to interpret the aggregated clickthrough evidence. But first, we consider the best performing strategy, UserBehavior. Incorporating post-search navigation history in addition to clickthroughs (Browsing features) results in the highest recall and precision among all methods compared. Browse exhibits precision above 0.7 at recall of 0.16, significantly outperforming our Baseline and clickthrough-only strategies. Furthermore, Browse is able to achieve high recall (as high as 0.43) while maintaining precision (0.67) significantly higher than the baseline ranking. To further analyze the value of different dimensions of implicit feedback modeled by the UserBehavior strategy, we consider each group of features in isolation. Figure 6.2 reports Precision vs. Recall for each feature group. Interestingly, Query-text alone has low accuracy (only marginally better than Random). Furthermore, Browsing features alone have higher precision (with lower maximum recall achieved) than considering all of the features in our UserBehavior model. Applying different machine learning methods for combining classifier predictions may increase the performance of using all features for all recall values.

Figure 6.2: Precision vs. recall for predicting relevance with each group of features individually.
Figure 6.3: Recall vs. Precision of CD+CDiff and UserBehavior for query sets Q1, Q10, and Q20 (queries with at least 1, at least 10, and at least 20 clicks respectively).

Interestingly, the ranker trained over Clickthrough-only features achieves substantially higher recall and precision than the human-designed clickthrough-interpretation strategies described earlier. For example, the clickthrough-trained classifier achieves 0.67 precision at 0.42 recall vs. the maximum recall of 0.14 achieved by the CD+CDiff strategy. Our clickthrough and user behavior interpretation strategies rely on extensive user interaction data. We consider the effects of having sufficient interaction data available for a query before proposing a re-ranking of results for that query. Figure 6.3 reports recall-precision curves for the CD+CDiff and UserBehavior methods for different test query sets with at least 1 click (Q1), 10 clicks (Q10), and 20 clicks (Q20) available per query. Not surprisingly, CD+CDiff improves with more clicks. This indicates that accuracy will improve as more user interaction histories become available, and more queries from the Q1 set will have comprehensive interaction histories. Similarly, the UserBehavior strategy performs better for queries with 10 and 20 clicks, although the improvement is less dramatic than for CD+CDiff. For queries with sufficient clicks, CD+CDiff exhibits precision comparable with Browse at lower recall.

Figure 6.4: Recall of CD+CDiff and UserBehavior strategies at a fixed minimum precision of 0.7 for varying amounts of user activity data (7, 12, 17, 21 days).

Our techniques often do not make relevance predictions for search results (i.e., if no interaction data is available for the lower-ranked results), consequently maintaining higher precision at the expense of recall. In contrast, the current search engine always makes a prediction for every result for a given query. As a consequence, the recall of Current is high (0.627) at the expense of lower precision. As another dimension of acquiring training data, we consider the learning curve with respect to the amount (days) of training data available.
We now briefly summarize our experimental results. We showed that by intelligently aggregating user clickthroughs across queries and users, we can achieve higher accuracy in predicting user preferences than previous strategies. Because of the skewed distribution of user clicks, our clickthrough-only strategies have high precision but low recall (i.e., they do not attempt to predict the relevance of many search results). Nevertheless, our CD+CDiff clickthrough strategy outperforms the most recent state-of-the-art results by a large margin (0.72 precision for CD+CDiff vs. 0.64 for SA+N) at the highest recall level of SA+N. Furthermore, by considering the comprehensive UserBehavior features that model user interactions after the search and beyond the initial click, we can achieve substantially higher precision and recall than by considering clickthrough alone. Our UserBehavior strategy achieves recall of over 0.43 with precision of over 0.67 (with much higher precision at lower recall levels), substantially outperforming the current search engine preference ranking and all other implicit feedback interpretation methods.

7. CONCLUSIONS AND FUTURE WORK

Our paper is the first, to our knowledge, to interpret post-search user behavior to estimate user preferences in a real web search setting. We showed that our robust models result in higher prediction accuracy than previously published techniques. We introduced new, robust, probabilistic techniques for interpreting clickthrough evidence by aggregating across users and queries. Our methods result in clickthrough interpretation substantially more accurate than previously published results not specifically designed for web search scenarios. Our methods' predictions of relevance preferences are substantially more accurate than the current state-of-the-art search result ranking that does not consider user interactions. We also presented a general model for interpreting post-search user behavior that incorporates clickthrough, browsing, and query features. By considering the complete search experience after the initial query and click, we demonstrated prediction accuracy far exceeding that of interpreting only the limited clickthrough information. Furthermore, we showed that automatically learning to interpret user behavior results in substantially better performance than the human-designed ad-hoc clickthrough interpretation strategies. Another benefit of automatically learning to interpret user behavior is that such methods can adapt to changing conditions and changing user profiles. For example, the user behavior model on intranet search may be different from the web search behavior. Our general UserBehavior method would be able to adapt to these changes by automatically learning to map new behavior patterns to explicit relevance ratings. A natural application of our preference prediction models is to improve web search ranking [1]. In addition, our work has many potential applications including click spam detection, search abuse detection, personalization, and domain-specific ranking.
For example, our automatically derived behavior models could be trained on examples of search abuse or click spam behavior instead of relevance labels. Alternatively, our models could be used directly to detect anomalies in user behavior--either due to abuse or to operational problems with the search engine. While our techniques perform well on average, our assumptions about clickthrough distributions (and learning the user behavior models) may not hold equally well for all queries. For example, queries with divergent access patterns (e.g., for ambiguous queries with multiple meanings) may result in behavior inconsistent with the model learned for all queries. Hence, clustering queries and learning different predictive models for each query type is a promising research direction. Query distributions also change over time, and it would be productive to investigate how that affects the predictive ability of these models. Furthermore, some predicted preferences may be more valuable than others, and we plan to investigate different metrics to capture the utility of the predicted preferences. As we showed in this paper, using the "wisdom of crowds" can give us an accurate interpretation of user interactions even in the inherently noisy web search setting. Our techniques allow us to automatically predict relevance preferences for web search results with accuracy greater than the previously published methods. The predicted relevance preferences can be used for automatic relevance evaluation and tuning, for deploying search in new settings, and ultimately for improving the overall web search experience.
H-46
Broad Expertise Retrieval in Sparse Data Environments
Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings.
[ "broad expertis retriev", "spars data environ", "gener languag model", "languag model", "baselin expertis retriev method", "organiz structur", "intranet search", "expert colleagu", "trec enterpris track", "expert find task", "co-occurr", "topic and organiz structur", "bay' theorem", "baselin model", "expertis search", "expert find" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "U", "M", "U", "R", "U", "R", "M", "M" ]
Broad Expertise Retrieval in Sparse Data Environments

Krisztian Balog, ISLA, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands, kbalog@science.uva.nl
Toine Bogers, ILK, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands, A.M.Bogers@uvt.nl
Leif Azzopardi, Dept. of Computing Science, University of Glasgow, Glasgow, G12 8QQ, leif@dcs.gla.ac.uk
Maarten de Rijke, ISLA, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands, mdr@science.uva.nl
Antal van den Bosch, ILK, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands, Antal.vdnBosch@uvt.nl

ABSTRACT

Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings.

Categories and Subject Descriptors: H.3 [Information Storage and Retrieval]: H.3.1 Content Analysis and Indexing; H.3.3 Information Search and Retrieval; H.3.4 Systems and Software; H.4 [Information Systems Applications]: H.4.2 Types of Systems; H.4.m Miscellaneous

General Terms: Algorithms, Measurement, Performance, Experimentation

1. INTRODUCTION

An organization's intranet provides a means for exchanging information between employees and for facilitating employee collaborations. To efficiently and effectively achieve this, it is necessary to provide search facilities that enable employees not only to access documents, but also to identify expert colleagues. At the TREC Enterprise Track [22] the need to study and understand expertise retrieval has been recognized through the introduction of Expert Finding tasks. The goal of expert finding is to identify a list of people who are knowledgeable about a given topic. This task is usually addressed by uncovering associations between people and topics [10]; commonly, a co-occurrence of the name of a person with topics in the same context is assumed to be evidence of expertise. An alternative task, which uses the same idea of people-topic associations, is expert profiling, where the task is to return a list of topics that a person is knowledgeable about [3]. The launch of the Expert Finding task at TREC has generated a lot of interest in expertise retrieval, with rapid progress being made in terms of modeling, algorithms, and evaluation aspects. However, nearly all of the expert finding or profiling work performed has been validated experimentally using the W3C collection [24] from the Enterprise Track.
While this collection is currently the only publicly available test collection for expertise retrieval tasks, it represents only one type of intranet. With only one test collection it is not possible to generalize conclusions to other realistic settings. In this paper we focus on expertise retrieval in a realistic setting that differs from the W3C setting: one in which relatively small amounts of clean, multilingual data are available, covering a broad range of expertise areas, as can be found on the intranets of universities and other knowledge-intensive organizations. Typically, this setting features several additional types of structure: topical structure (e.g., topic hierarchies as employed by the organization), organizational structure (faculty, department, ...), as well as multiple types of documents (research and course descriptions, publications, and academic homepages). This setting is quite different from the W3C setting in ways that might impact upon the performance of expertise retrieval tasks.

We focus on a number of research questions in this paper: Does the relatively small amount of data available on an intranet affect the quality of the topic-person associations that lie at the heart of expertise retrieval algorithms? How do state-of-the-art algorithms developed on the W3C data set perform in the alternative scenario of the type described above? More generally, do the lessons from the Expert Finding task at TREC carry over to this setting? How does the inclusion or exclusion of different document types affect expertise retrieval tasks? In addition, how can the topical and organizational structure be used for retrieval purposes?

To answer our research questions, we first present a set of baseline approaches, based on generative language modeling, aimed at finding associations between topics and people. This allows us to formulate the expert finding and expert profiling tasks in a uniform way, and has the added benefit of allowing us to understand the relations between the two tasks. For our experimental evaluation, we introduce a new data set (the UvT Expert Collection) which is representative of the type of intranet that we described above. Our collection is based on publicly available data, crawled from the website of Tilburg University (UvT). This type of data is particularly interesting, since (1) it is clean, heterogeneous, structured, and focused, but comprises a limited number of documents; (2) it contains information on the organizational hierarchy; (3) it is bilingual (English and Dutch); and (4) the expertise areas of an individual are provided by the employees themselves. Using the UvT Expert collection, we conduct two sets of experiments. The first is aimed at determining the effectiveness of baseline expertise finding and profiling methods in this new setting. A second group of experiments is aimed at extensions of the baseline methods that exploit characteristic features of the UvT Expert Collection; specifically, we propose and evaluate refined expert finding and profiling methods that incorporate topicality and organizational structure.

Apart from the research questions and data set that we contribute, our main contributions are as follows. The baseline models developed for expertise finding perform well on the new data set. While in the W3C setting the expert finding task appears to be more difficult than profiling, for the UvT data the opposite is the case.
We find that profiling on the UvT data set is considerably more difficult than on the W3C set, which we believe is due to the large (but realistic) number of topical areas that we used for profiling: about 1,500 for the UvT set, versus 50 in the W3C case. Taking the similarity between topics into account can significantly improve retrieval performance. The best performing similarity measures are content-based, and can therefore be applied in the W3C (and other) settings as well. Finally, we demonstrate that the organizational structure can be exploited in the form of a context model, improving MAP scores for certain models by up to 70%.

The remainder of this paper is organized as follows. In the next section we review related work. Then, in Section 3 we provide detailed descriptions of the expertise retrieval tasks that we address in this paper: expert finding and expert profiling. In Section 4 we present our baseline models, whose performance is then assessed in Section 6 using the UvT data set that we introduce in Section 5. Advanced models exploiting specific features of our data are presented in Section 7 and evaluated in Section 8. We formulate our conclusions in Section 9.

2. RELATED WORK

Initial approaches to expertise finding often employed databases containing information on the skills and knowledge of each individual in the organization [11]. Most of these tools (usually called yellow pages or people-finding systems) rely on people to self-assess their skills against a predefined set of keywords. For updating profiles in these systems in an automatic fashion there is a need for intelligent technologies [5]. More recent approaches use specific document sets (such as email [6] or software [18]) to find expertise. In contrast with focusing on particular document types, there is also an increased interest in the development of systems that index and mine published intranet documents as sources of evidence for expertise. One such published approach is the P@noptic system [9], which builds a representation of each person by concatenating all documents associated with that person; this is similar to Model 1 of Balog et al. [4], who formalize and compare two methods. Balog et al.'s Model 1 directly models the knowledge of an expert from associated documents, while their Model 2 first locates documents on the topic and then finds the associated experts. In the reported experiments the second method performs significantly better when there are sufficiently many associated documents per candidate. Most systems that took part in the 2005 and 2006 editions of the Expert Finding task at TREC implemented (variations on) one of these two models; see [10, 20]. Macdonald and Ounis [16] propose a different approach for ranking candidate expertise with respect to a topic based on data fusion techniques, without using collection-specific heuristics; they find that applying field-based weighting models improves the ranking of candidates. Petkova and Croft [19] propose yet another approach, based on a combination of the above Models 1 and 2, explicitly modeling topics. Turning to other expert retrieval tasks that can also be addressed using topic-people associations, Balog and de Rijke [3] addressed the task of determining topical expert profiles. While their methods proved to be efficient on the W3C corpus, they require an amount of data that may not be available in the typical knowledge-intensive organization.
Balog and de Rijke [2] study the related task of finding experts that are similar to a small set of experts given as input. As an aside, creating a textual summary of a person shows some similarities to biography finding, which has received a considerable amount of attention recently; see e.g., [13]. We use generative language modeling to find associations between topics and people. In our modeling of expert finding and profiling we collect evidence for expertise from multiple sources, in a heterogeneous collection, and integrate it with the co-occurrence of candidates' names and query terms; the language modeling setting allows us to do this in a transparent manner. Our modeling proceeds in two steps. In the first step, we consider three baseline models, two taken from [4] (the Models 1 and 2 mentioned above), and one a refined version of a model introduced in [3] (which we refer to as Model 3 below); this third model is also similar to the model described by Petkova and Croft [19]. The models we consider in our second round of experiments are mixture models similar to contextual language models [1] and to the expanded documents of Tao et al. [21]; however, the features that we use for defining our expansions (including topical structure and organizational structure) have not been used in this way before.

3. TASKS

In the expertise retrieval scenario that we envisage, users seeking expertise within an organization have access to an interface that combines a search box (where they can search for experts or topics) with navigational structures (of experts and of topics) that allows them to click their way to an expert page (providing the profile of a person) or a topic page (providing a list of experts on the topic). To feed the above interface, we face two expertise retrieval tasks, expert finding and expert profiling, which we first define and then formalize using generative language models. In order to model either task, the probability of the query topic being associated with a candidate expert plays a key role in the final estimates for searching and profiling. By using language models, both the candidates and the query are characterized by distributions of terms in the vocabulary (used in the documents made available by the organization whose expertise retrieval needs we are addressing).

3.1 Expert finding

Expert finding involves the task of finding the right person with the appropriate skills and knowledge: "Who are the experts on topic X?" E.g., an employee wants to ascertain who worked on a particular project to find out why particular decisions were made, without having to trawl through documentation (if there is any). Or they may need a trained specialist for consultancy on a specific problem. Within an organization there are usually many possible candidates who could be experts for a given topic. We can state this problem as follows: what is the probability of a candidate ca being an expert given the query topic q? That is, we determine p(ca|q), and rank candidates ca according to this probability. The candidates with the highest probability given the query are deemed the most likely experts for that topic. The challenge is how to estimate this probability accurately. Since the query is likely to consist of only a few terms describing the expertise required, we should be able to obtain a more accurate estimate by invoking Bayes' Theorem and estimating:

p(ca|q) = \frac{p(q|ca)\, p(ca)}{p(q)},    (1)

where p(ca) is the probability of a candidate and p(q) is the probability of a query.
Since p(q) is a constant, it can be ignored for ranking purposes. Thus, the probability of a candidate ca being an expert given the query q is proportional to the probability of the query given the candidate, p(q|ca), weighted by the a priori belief p(ca) that candidate ca is an expert:

p(ca|q) \propto p(q|ca)\, p(ca).    (2)

In this paper our main focus is on estimating the probability of a query given the candidate, p(q|ca), because this probability captures the extent to which the candidate knows about the query topic. Whereas the candidate priors are generally assumed to be uniform (and thus will not influence the ranking), it has been demonstrated that a sensible choice of priors may improve the performance [20].

3.2 Expert profiling

While the task of expert searching was concerned with finding experts given a particular topic, the task of expert profiling seeks to answer a related question: what topics does a candidate know about? Essentially, this turns the question of expert finding around. The profiling of an individual candidate involves the identification of areas of skills and knowledge in which they have expertise, and an evaluation of the level of proficiency in each of these areas. This is the candidate's topical profile. Generally, topical profiles within organizations consist of tabular structures which explicitly catalogue the skills and knowledge of each individual in the organization. However, such practice is limited by the resources available for defining, creating, maintaining, and updating these profiles over time. By focusing on automatic methods which draw upon the available evidence within the document repositories of an organization, our aim is to reduce the human effort associated with the maintenance of topical profiles.¹ A topical profile of a candidate, then, is defined as a vector in which each element i corresponds to the candidate ca's expertise on a given topic ki, i.e., s(ca, ki). Each topic ki defines a particular knowledge area or skill that the organization uses to define the candidate's topical profile. Thus, it is assumed that a list of topics {k1, ..., kn}, where n is the number of pre-defined topics, is given:

profile(ca) = \langle s(ca, k_1), s(ca, k_2), \ldots, s(ca, k_n) \rangle.    (3)

¹ Context and evidence are needed to help users of expertise finding systems decide whom to contact when seeking expertise in a particular area. Examples of such context are: Who does she work with? What are her contact details? Is she well-connected, just in case she is not able to help us herself? What is her role in the organization? Who is her superior? Collaborators, affiliations, etc. are all part of the candidate's social profile, and can serve as a background against which the system's recommendations should be interpreted. In this paper we only address the problem of determining topical profiles, and leave social profiling to future work.

We state the problem of quantifying the competence of a person on a certain knowledge area as follows: what is the probability of a knowledge area (ki) being part of the candidate's (expertise) profile? Here, s(ca, ki) is defined by p(ki|ca). Our task, then, is to estimate p(ki|ca), which is equivalent to the problem of obtaining p(q|ca), where the topic ki is represented as a query topic q, i.e., a sequence of keywords representing the expertise required. Both the expert finding and profiling tasks rely on the accurate estimation of p(q|ca). The only difference derives from the prior probability that a person is an expert (p(ca)), which can be incorporated into the expert finding task. This prior does not apply to the profiling task, since the candidate (individual) is fixed.
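Since both tasks reduce to the same estimate, a minimal sketch can make the shared machinery explicit. All function names, the callable interface, and the toy probabilities below are our own illustrative assumptions, not the authors' implementation.

def rank_experts(p_q_given_ca, prior=None):
    # Expert finding (Eq. 2): rank candidates by p(q|ca) * p(ca).
    if prior is None:
        # Uniform prior: a constant factor that leaves the ranking unchanged.
        prior = {ca: 1.0 for ca in p_q_given_ca}
    scores = {ca: p * prior[ca] for ca, p in p_q_given_ca.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def topical_profile(candidate, topics, estimate):
    # Expert profiling (Eq. 3): the vector <s(ca,k_1), ..., s(ca,k_n)>,
    # with s(ca,k_i) = p(k_i|ca) computed by the same p(q|ca) estimator.
    return [(k, estimate(k, candidate)) for k in topics]

# Toy estimates of p(q|ca) for a single query over three candidates.
print(rank_experts({"ca1": 0.02, "ca2": 0.05, "ca3": 0.01}))

Any of the baseline models presented next can be plugged in as the estimator; the two tasks differ only in which argument is held fixed.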
4. BASELINE MODELS

In this section we describe our baseline models for estimating p(q|ca), i.e., associations between topics and people. Both expert finding and expert profiling boil down to this estimation. We employ three models for calculating this probability.

4.1 From topics to candidates

Using Candidate Models: Model 1. Model 1 [4] defines the probability of a query given a candidate, p(q|ca), using standard language modeling techniques, based on a multinomial unigram language model. For each candidate ca, a candidate language model θca is inferred such that the probability of a term given θca is nonzero for all terms, i.e., p(t|θca) > 0. From the candidate model the query is generated with the following probability:

p(q|\theta_{ca}) = \prod_{t \in q} p(t|\theta_{ca})^{n(t,q)},

where each term t in the query q is sampled identically and independently, and n(t, q) is the number of times t occurs in q. The candidate language model is inferred as follows: (1) an empirical model p(t|ca) is computed; (2) it is smoothed with background probabilities. Using the associations between a candidate and a document, the probability p(t|ca) can be approximated by:

p(t|ca) = \sum_{d} p(t|d)\, p(d|ca),

where p(d|ca) is the probability that candidate ca generates a supporting document d, and p(t|d) is the probability of a term t occurring in the document d. We use the maximum-likelihood estimate of a term, that is, the normalised frequency of the term t in document d. The strength of the association between document d and candidate ca, expressed by p(d|ca), reflects the degree to which the candidate's expertise is described by this document. The estimation of this probability is presented later, in Section 4.2. The candidate model is then constructed as a linear interpolation of p(t|ca) and the background model p(t), to ensure there are no zero probabilities; this results in the final estimation:

p(q|\theta_{ca}) = \prod_{t \in q} \left\{ (1 - \lambda) \sum_{d} p(t|d)\, p(d|ca) + \lambda\, p(t) \right\}^{n(t,q)}.    (4)

Model 1 amasses all the term information from all the documents associated with the candidate, and uses this to represent that candidate. The model is used to predict how likely a candidate would produce a query q. This can be intuitively interpreted as the probability of this candidate talking about the query topic, where we assume that this is indicative of their expertise.

Using Document Models: Model 2. Model 2 [4] takes a different approach. Here, the process is broken into two parts. Given a candidate ca, (1) a document associated with the candidate is selected with probability p(d|ca), and (2) from this document a query q is generated with probability p(q|d). Then the sum over all documents is taken to obtain p(q|ca):

p(q|ca) = \sum_{d} p(q|d)\, p(d|ca).    (5)

The probability of a query given a document is estimated by inferring a document language model θd for each document d, in a similar manner to how the candidate model was inferred:

p(t|\theta_d) = (1 - \lambda)\, p(t|d) + \lambda\, p(t),    (6)

where p(t|d) is the probability of the term in the document. The probability of a query given the document model is:

p(q|\theta_d) = \prod_{t \in q} p(t|\theta_d)^{n(t,q)}.

The final estimate of p(q|ca) is obtained by substituting p(q|θd) for p(q|d) in Eq. 5 (see [4] for full details). Conceptually, Model 2 differs from Model 1 because the candidate is not directly modeled. Instead, the document acts like a hidden variable in the process, separating the query from the candidate. This process is akin to how a user may search for candidates with a standard search engine: initially by finding the documents which are relevant, and then seeing who is associated with each document. By examining a number of documents, the user can obtain an idea of which candidates are more likely to discuss the topic q.
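A minimal sketch of Models 1 and 2 over a toy corpus follows (Eqs. 4-6). The two-document corpus, the binary authorship table, the smoothing weight, and all identifiers are illustrative assumptions; the paper's actual implementation details (e.g., preprocessing) are not reproduced here.

from collections import Counter

LAMBDA = 0.5  # smoothing weight; an assumed value

docs = {"d1": "language models for expert search".split(),
        "d2": "distributed systems and networks".split()}
authors = {"d1": {"ca1"}, "d2": {"ca2"}}   # used for p(d|ca), cf. Section 4.2

tf = {d: Counter(ws) for d, ws in docs.items()}
bg = Counter(w for ws in docs.values() for w in ws)
bg_total = sum(bg.values())

def p_t_d(t, d):      # maximum-likelihood p(t|d)
    return tf[d][t] / len(docs[d])

def p_t(t):           # background model p(t)
    return bg[t] / bg_total

def p_d_ca(d, ca):    # binary document-candidate association
    return 1.0 if ca in authors[d] else 0.0

def model1(query, ca):
    # p(q|theta_ca): smoothed candidate model, Eq. 4.
    score = 1.0
    for t in query:   # repeated query terms realize the n(t,q) exponent
        p_t_ca = sum(p_t_d(t, d) * p_d_ca(d, ca) for d in docs)
        score *= (1 - LAMBDA) * p_t_ca + LAMBDA * p_t(t)
    return score

def model2(query, ca):
    # p(q|ca) = sum_d p(q|theta_d) p(d|ca): document models, Eqs. 5-6.
    total = 0.0
    for d in docs:
        p_q_d = 1.0
        for t in query:
            p_q_d *= (1 - LAMBDA) * p_t_d(t, d) + LAMBDA * p_t(t)
        total += p_q_d * p_d_ca(d, ca)
    return total

print(model1(["expert", "search"], "ca1"), model2(["expert", "search"], "ca1"))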
Using Topic Models: Model 3. We introduce a third model, Model 3. Instead of attempting to model the query generation process via candidate or document models, we represent the query as a topic language model and directly estimate the probability of the candidate, p(ca|q). This approach is similar to the model presented in [3, 19]. As with the previous models, a language model is inferred, but this time for the query. We adapt the work of Lavrenko and Croft [14] to estimate a topic model from the query. The procedure is as follows. Given a collection of documents and a query topic q, it is assumed that there exists an unknown topic model θk that assigns probabilities p(t|θk) to the term occurrences in the topic documents. Both the query and the documents are samples from θk (as opposed to the previous approaches, where a query is assumed to be sampled from a specific document or candidate model). The main task is to estimate p(t|θk), the probability of a term given the topic model. Since the query q is very sparse, and as there are no examples of documents on the topic, this distribution needs to be approximated. Lavrenko and Croft [14] suggest a reasonable way of obtaining such an approximation, by assuming that p(t|θk) can be approximated by the probability of term t given the query q. We can then estimate p(t|q) using the joint probability of observing the term t together with the query terms q1, ..., qm, and dividing by the joint probability of the query terms:

p(t|\theta_k) \approx p(t|q) = \frac{p(t, q_1, \ldots, q_m)}{p(q_1, \ldots, q_m)} = \frac{p(t, q_1, \ldots, q_m)}{\sum_{t' \in T} p(t', q_1, \ldots, q_m)},

where p(q_1, \ldots, q_m) = \sum_{t' \in T} p(t', q_1, \ldots, q_m), and T is the entire vocabulary of terms. In order to estimate the joint probability p(t, q_1, \ldots, q_m), we follow [14, 15] and assume t and q1, ..., qm are mutually independent once we pick a source distribution from the set of underlying source distributions U. If we choose U to be a set of document models, then to construct this set the query q is issued against the collection, and the top n documents returned are assumed to be relevant to the topic, and thus treated as samples from the topic model. (Note that candidate models could be used instead.) With the document models forming U, the joint probability of term and query becomes:

p(t, q_1, \ldots, q_m) = \sum_{d \in U} p(d) \left\{ p(t|\theta_d) \prod_{i=1}^{m} p(q_i|\theta_d) \right\}.    (7)

Here, p(d) denotes the prior distribution over the set U, which reflects the relevance of the document to the topic. We assume that p(d) is uniform across U. In order to rank candidates according to the topic model defined, we use the Kullback-Leibler divergence metric (KL, [8]) to measure the difference between the candidate models and the topic model:

KL(\theta_k \| \theta_{ca}) = \sum_{t} p(t|\theta_k) \log \frac{p(t|\theta_k)}{p(t|\theta_{ca})}.    (8)

Candidates with a smaller divergence from the topic model are considered to be more likely experts on that topic. The candidate model θca is defined in Eq. 4. By using the KL divergence instead of the probability of a candidate given the topic model, p(ca|θk), we avoid normalization problems.
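The following sketch illustrates Model 3 under simplifying assumptions: the pseudo-relevant set U, the smoothed document and candidate distributions, and the names below are toy stand-ins for the estimates defined in Eqs. 4, 7, and 8, not the authors' code.

import math

def topic_model(query, doc_models, vocab):
    # p(t|theta_k) via Eq. 7, with a uniform p(d) over U; the constant
    # prior cancels in the final normalization.
    joint = {t: sum(dm[t] * math.prod(dm[q] for q in query)
                    for dm in doc_models)
             for t in vocab}
    z = sum(joint.values())
    return {t: v / z for t, v in joint.items()}

def kl(p, q):
    # KL(theta_k || theta_ca), Eq. 8; smaller divergence = likelier expert.
    return sum(pt * math.log(pt / q[t]) for t, pt in p.items() if pt > 0)

vocab = ["expert", "search", "systems"]
# Smoothed document models over the vocabulary (all probabilities nonzero),
# standing in for the top-n retrieved documents that form U.
U = [{"expert": 0.5, "search": 0.4, "systems": 0.1},
     {"expert": 0.3, "search": 0.3, "systems": 0.4}]
theta_k = topic_model(["expert"], U, vocab)

# Smoothed candidate models theta_ca, as would be produced by Eq. 4.
candidates = {"ca1": {"expert": 0.45, "search": 0.35, "systems": 0.20},
              "ca2": {"expert": 0.10, "search": 0.20, "systems": 0.70}}
print(sorted(candidates, key=lambda ca: kl(theta_k, candidates[ca])))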
4.2 Document-candidate associations

For our models we need to be able to estimate the probability p(d|ca), which expresses the extent to which a document d characterizes the candidate ca. In [4], two methods are presented for estimating this probability, based on the number of person names recognized in a document. However, in our (intranet) setting it is reasonable to assume that authors of documents can be identified unambiguously (e.g., as the author of an article, the teacher assigned to a course, the owner of a web page, etc.). Hence, we set p(d|ca) to 1 if candidate ca is an author of document d, and to 0 otherwise. In Section 6 we describe how authorship can be determined for the different types of documents within the collection.

5. THE UVT EXPERT COLLECTION

The UvT Expert collection used in the experiments in this paper fits the scenario outlined in Section 3. The collection is based on the Webwijs ("Webwise") system developed at Tilburg University (UvT) in the Netherlands. Webwijs (http://www.uvt.nl/webwijs/) is a publicly accessible database of UvT employees who are involved in research or teaching; currently, Webwijs contains information about 1168 experts, each of whom has a page with contact information and, if made available by the expert, a research description and publications list. In addition, each expert can select expertise areas from a list of 1491 topics and is encouraged to suggest new topics, which need to be approved by the Webwijs editor. Each topic has a separate page that shows all experts associated with that topic and, if available, a list of related topics. Webwijs is available in Dutch and English, and this bilinguality has been preserved in the collection. Every Dutch Webwijs page has an English translation. Not all Dutch topics have an English translation, but the reverse is true: the 981 English topics all have a Dutch equivalent. About 42% of the experts teach courses at Tilburg University; these courses were also crawled and included in the profile. In addition, about 27% of the experts link to their academic homepage from their Webwijs page. These home pages were crawled and added to the collection. (This means that if experts put the full-text versions of their publications on their academic homepage, these were also available for indexing.) We also obtained 1880 full-text versions of publications from the UvT institutional repository and converted them to plain text.

Table 2: Descriptive statistics of the Dutch and English versions of the UvT Expert collection.

                                               Dutch     English
no. of experts                                 1168      1168
no. of experts with ≥ 1 topic                  743       727
no. of topics                                  1491      981
no. of expert-topic pairs                      4318      3251
avg. no. of topics/expert                      5.8       5.9
max. no. of topics/expert (no. of experts)     60 (1)    35 (1)
min. no. of topics/expert (no. of experts)     1 (74)    1 (106)
avg. no. of experts/topic                      2.9       3.3
max. no. of experts/topic (no. of topics)      30 (1)    30 (1)
min. no. of experts/topic (no. of topics)      1 (615)   1 (346)
no. of experts with HP                         318       318
no. of experts with CD                         318       318
avg. no. of CDs per teaching expert            3.5       3.5
no. of experts with RD                         329       313
no. of experts with PUB                        734       734
avg. no. of PUBs per expert                    27.0      27.0
avg. no. of PUB citations per expert           25.2      25.2
avg. no. of full-text PUBs per expert          1.8       1.8

We ran the TextCat [23] language identifier to classify the language of the home pages and the full-text publications. We restricted ourselves to pages where the classifier was confident about the language used on the page.
This resulted in four document types: research descriptions (RD), course descriptions (CD), publications (PUB; full-text and citation-only versions), and academic homepages (HP). Everything was bundled into the UvT Expert collection, which is available at http://ilk.uvt.nl/uvt-expert-collection/.

The UvT Expert collection was extracted from a different organizational setting than the W3C collection and differs from it in a number of ways. The UvT setting is one with relatively small amounts of multilingual data. Document-author associations are clear and the data is structured and clean. The collection covers a broad range of expertise areas, as one can typically find on intranets of universities and other knowledge-intensive institutes. Additionally, our university setting features several types of structure (topical and organizational), as well as multiple document types. Another important difference between the two data sets is that the expertise areas in the UvT Expert collection are self-selected instead of being based on group membership or assignments by others. Size is another dimension along which the W3C and UvT Expert collections differ: the latter is the smaller of the two. Also realistic are the large differences in the amount of information available for each expert. Utilizing Webwijs is voluntary; 425 Dutch experts did not select any topics at all. This leaves us with 743 Dutch and 727 English usable expert profiles. Table 2 provides descriptive statistics for the UvT Expert collection.

Universities tend to have a hierarchical structure that goes from the faculty level, to departments, research groups, down to the individual researchers. In the UvT Expert collection we have information about the affiliations of researchers with faculties and institutes, providing us with a two-level organizational hierarchy. Tilburg University has 22 organizational units at the faculty level (including the university office and several research institutes) and 71 departments, which amounts to 3.2 departments per faculty. As to the topical hierarchy used by Webwijs, 131 of the 1491 topics are top nodes in the hierarchy. This hierarchy has an average topic chain length of 2.65 and a maximum length of 7 topics.

6. EVALUATION

Below, we evaluate Section 4's models for expert finding and profiling on the UvT Expert collection. We detail our research questions and experimental setup, and then present our results.

6.1 Research Questions

We address the following research questions. Both expert finding and profiling rely on the estimation of p(q|ca). The question is how the models compare on the different tasks, and in the setting of the UvT Expert collection. In [4], Model 2 outperformed Model 1 on the W3C collection. How do they compare on our data set? And how does Model 3 compare to Model 1? What about performance differences between the two languages in our test collection?

6.2 Experimental Setup

The output of our models was evaluated against the self-assigned topic labels, which were treated as relevance judgements. Results were evaluated separately for English and Dutch. For English we only used topics for which a Dutch translation was available; for Dutch all topics were considered. The results were averaged over the queries in the intersection of relevance judgements and results; missing queries do not contribute a value of 0 to the scores. We use standard information retrieval measures, such as Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). We also report the percentage of topics (%q) and candidates (%ca) covered, for the expert finding and profiling tasks, respectively.
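For concreteness, here is a minimal sketch of how these measures are conventionally computed; the toy rankings and relevance sets are illustrative, not the collection's data.

def average_precision(ranking, relevant):
    # AP for one query: mean of precision@k over the ranks of relevant hits.
    hits, precisions = 0, []
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def reciprocal_rank(ranking, relevant):
    # 1/rank of the first relevant item; 0 if none is retrieved.
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            return 1.0 / k
    return 0.0

# Self-assigned expertise areas act as the relevance judgements.
runs = {"q1": (["ca2", "ca1", "ca3"], {"ca1"}),
        "q2": (["ca1", "ca3", "ca2"], {"ca1", "ca3"})}
MAP = sum(average_precision(r, rel) for r, rel in runs.values()) / len(runs)
MRR = sum(reciprocal_rank(r, rel) for r, rel in runs.values()) / len(runs)
print(MAP, MRR)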
We also report the percentage of topics (%q) and candidates (%ca) covered, for the expert finding and profiling tasks, respectively.

6.3 Results

Table 1 shows the performance of Models 1, 2, and 3 on the expert finding and profiling tasks. The rows of the table correspond to the various document types (RD, CD, PUB, and HP) and to their combinations. RD+CD+PUB+HP is equivalent to the full collection and will be referred to as the BASELINE of our experiments.

                      Expert finding                                                Expert profiling
                      Model 1             Model 2             Model 3               Model 1             Model 2             Model 3
Document types        %q    MAP   MRR     %q    MAP   MRR     %q    MAP   MRR       %ca   MAP   MRR     %ca   MAP   MRR     %ca   MAP   MRR
English
RD                    97.8  0.126 0.269   83.5  0.144 0.311   83.3  0.129 0.271     100   0.089 0.189   39.3  0.232 0.465   41.1  0.166 0.337
CD                    97.8  0.118 0.227   91.7  0.123 0.248   91.7  0.118 0.226     32.8  0.188 0.381   32.4  0.195 0.385   32.7  0.203 0.370
PUB                   97.8  0.200 0.330   98.0  0.216 0.372   98.0  0.145 0.257     78.9  0.167 0.364   74.5  0.212 0.442   78.9  0.135 0.299
HP                    97.8  0.081 0.186   97.4  0.071 0.168   97.2  0.062 0.149     31.2  0.150 0.299   28.8  0.185 0.335   30.1  0.136 0.287
RD+CD                 97.8  0.188 0.352   92.9  0.193 0.360   92.9  0.150 0.273     100   0.145 0.286   61.3  0.251 0.477   63.2  0.217 0.416
RD+CD+PUB             97.8  0.235 0.373   98.1  0.277 0.439   98.1  0.178 0.305     100   0.196 0.380   87.2  0.280 0.533   89.5  0.170 0.344
RD+CD+PUB+HP          97.8  0.237 0.372   98.6  0.280 0.441   98.5  0.166 0.293     100   0.199 0.387   88.7  0.281 0.525   90.9  0.169 0.329
Dutch
RD                    61.3  0.094 0.229   38.4  0.137 0.336   38.3  0.127 0.295     38.0  0.127 0.386   34.1  0.138 0.420   38.0  0.105 0.327
CD                    61.3  0.107 0.212   49.7  0.128 0.256   49.7  0.136 0.261     32.5  0.151 0.389   31.8  0.158 0.396   32.5  0.170 0.380
PUB                   61.3  0.193 0.319   59.5  0.218 0.368   59.4  0.173 0.291     78.8  0.126 0.364   76.0  0.150 0.424   78.8  0.103 0.294
HP                    61.3  0.063 0.169   56.6  0.064 0.175   56.4  0.062 0.163     29.8  0.108 0.308   27.8  0.125 0.338   29.8  0.098 0.255
RD+CD                 61.3  0.159 0.314   51.9  0.184 0.360   51.9  0.169 0.324     60.5  0.151 0.410   57.2  0.166 0.431   60.4  0.159 0.384
RD+CD+PUB             61.3  0.244 0.398   61.5  0.260 0.424   61.4  0.210 0.350     90.3  0.165 0.445   88.2  0.189 0.479   90.3  0.126 0.339
RD+CD+PUB+HP          61.3  0.249 0.401   62.6  0.265 0.436   62.6  0.195 0.344     91.9  0.164 0.426   90.1  0.195 0.488   91.9  0.125 0.328

Table 1: Performance of the models on the expert finding and profiling tasks, using different document types and their combinations. %q is the percentage of topics covered (applies to the expert finding task); %ca is the percentage of candidates covered (applies to the expert profiling task). The top and bottom blocks correspond to English and Dutch, respectively. The best scores are in boldface.

Looking at Table 1 we see that Model 2 performs the best across the board. However, when the data is clean and very focused (RD), Model 3 outperforms it in a number of cases. Model 1 has the best coverage of candidates (%ca) and topics (%q). The various document types differ in their characteristics and in how they improve the finding and profiling tasks. Expert profiling benefits greatly from the clean data present in the RD and CD document types, while the publications contribute the most to the expert finding task. Adding the homepages does not prove to be particularly useful.

When we compare the results across languages, we find that the coverage of English topics (%q) is higher than that of the Dutch ones for expert finding. Apart from that, the scores fall in the same range for both languages. For the profiling task the coverage of the candidates (%ca) is very similar for both languages. However, the performance is substantially better for the English topics.

While it is hard to compare scores across collections, we conclude with a brief comparison of the absolute scores in Table 1 to those reported in [3, 4] on the W3C test set (2005 edition). For expert finding, the MAP scores for Model 2 reported here are about 50% higher than the corresponding figures in [4], while our MRR scores are slightly below those in [4]. For expert profiling, the differences are far more dramatic: the MAP scores for Model 2 reported here are around 50% below the scores in [3], while the (best) MRR scores are about the same as those in [3]. The cause for the latter differences seems to reside in the number of knowledge areas considered here: approximately 30 times more than in the W3C setting.

7. ADVANCED MODELS

Now that we have developed and assessed basic language modeling techniques for expertise retrieval, we turn to refined models that exploit special features of our test collection.
7.1 Exploiting knowledge area similarity

One way to improve the scoring of a query given a candidate is to consider what other requests the candidate would satisfy, and to use them as further evidence to support the original query, proportional to how related the other requests are to the original query. This can be modeled by interpolating between p(q|ca) and the further supporting evidence from all similar requests q′, as follows:

    p′(q|ca) = λ · p(q|ca) + (1 − λ) · Σ_{q′} p(q|q′) · p(q′|ca),    (9)

where p(q|q′) represents the similarity between the two topics q and q′. To be able to work with similarity methods that are not necessarily probabilities, we set p(q|q′) = w(q, q′)/γ, where γ is a normalizing constant, such that γ = Σ_{q′′} w(q′′, q′).

We consider four methods for calculating the similarity score between two topics. Three approaches are strictly content-based, and establish similarity by examining co-occurrence patterns of topics within the collection, while the last approach exploits the hierarchical structure of topical areas that may be present within an organization (see [7] for further examples of integrating word relationships into language models).
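Before turning to the individual similarity methods, a small sketch may help fix ideas: Eq. 9 can be implemented with the similarity weight w(·,·) left pluggable, so that any of the four methods described next can be substituted. The function signature and the choice to exclude q itself from the expansion are illustrative assumptions, not part of the original formulation:

```python
def expanded_score(q, ca, p, w, topics, lam=0.5):
    """Eq. 9: p'(q|ca) = lam * p(q|ca) + (1 - lam) * sum_q' p(q|q') p(q'|ca).

    p(q, ca)  -- baseline estimate from Model 1, 2, or 3
    w(q, q')  -- any similarity weight (KLDIV, PMI, LL, HDIST)
    topics    -- the set of known topics that q' ranges over
    """
    expansion = 0.0
    for q2 in topics:
        if q2 == q:                 # expand with *other* requests only
            continue
        # p(q|q') = w(q, q') / gamma, with gamma = sum over q'' of w(q'', q')
        gamma = sum(w(q3, q2) for q3 in topics) or 1.0
        expansion += (w(q, q2) / gamma) * p(q2, ca)
    return lam * p(q, ca) + (1 - lam) * expansion
```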
The Kullback-Leibler (KL) divergence metric defined in Eq. 8 provides a measure of how different or similar two probability distributions are. A topic model is inferred for q and for q′ using the method presented in Section 4.1 to describe each query across the entire vocabulary. Since a lower KL score means the queries are more similar, we let

    w(q, q′) = max(KL(θq‖·)) − KL(θq‖θq′),

that is, the divergence is subtracted from the largest observed divergence, so that more similar topics receive higher weights.

Pointwise Mutual Information (PMI, [17]) is a measure of association used in information theory to determine the extent of independence between variables. The dependence between two queries is reflected by the SI(q, q′) score, where scores greater than zero indicate that it is likely that there is a dependence, which we take to mean that the queries are likely to be similar:

    SI(q, q′) = log [ p(q, q′) / (p(q) · p(q′)) ].    (10)

We estimate the probability of a topic p(q) using the number of documents relevant to query q within the collection. The joint probability p(q, q′) is estimated similarly, by using the concatenation of q and q′ as a query. To obtain p(q|q′), we then set w(q, q′) = SI(q, q′) when SI(q, q′) > 0 and w(q, q′) = 0 otherwise, because we are only interested in including queries that are similar.

The log-likelihood statistic provides another measure of dependence, which is more reliable than the pointwise mutual information measure [17]. Let k1 be the number of co-occurrences of q and q′, k2 the number of occurrences of q not co-occurring with q′, n1 the total number of occurrences of q′, and n2 the total number of topic tokens minus the number of occurrences of q′. Then, let p1 = k1/n1, p2 = k2/n2, p = (k1 + k2)/(n1 + n2), and

    LL(q, q′) = 2 · (ℓ(p1, k1, n1) + ℓ(p2, k2, n2) − ℓ(p, k1, n1) − ℓ(p, k2, n2)),

where ℓ(p, k, n) = k · log p + (n − k) · log(1 − p). A higher score indicates that the queries are more likely to be similar; thus we set w(q, q′) = LL(q, q′).

Finally, we also estimate the similarity of two topics based on their distance within the topic hierarchy. The topic hierarchy is viewed as a directed graph, and for all topic pairs the shortest path SP(q, q′) is calculated. We set the similarity score to be the reciprocal of the shortest path: w(q, q′) = 1/SP(q, q′).

7.2 Contextual information

Given the hierarchy of an organization, the units to which a person belongs are regarded as a context, so as to compensate for data sparseness. We model this as follows:

    p′(q|ca) = (1 − Σ_{ou∈OU(ca)} λ_ou) · p(q|ca) + Σ_{ou∈OU(ca)} λ_ou · p(q|ou),

where OU(ca) is the set of organizational units of which candidate ca is a member, and p(q|ou) expresses the strength of the association between query q and the unit ou. The latter probability can be estimated using any of the three basic models, by simply replacing ca with ou in the corresponding equations. An organizational unit is associated with all the documents that its members have authored. That is, p(d|ou) = max_{ca∈ou} p(d|ca).

7.3 A simple multilingual model

For knowledge institutes in Europe, academic or otherwise, a multilingual (or at least bilingual) setting is typical. The following model builds on a kind of independence assumption: there is no spill-over of expertise/profiles across language boundaries. While a simplification, this is a sensible first approach. That is:

    p′(q|ca) = Σ_{l∈L} λ_l · p(q_l|ca),

where L is the set of languages used in the collection, q_l is the translation of the query q to language l, and λ_l is a language-specific smoothing parameter, such that Σ_{l∈L} λ_l = 1.
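Both of these refinements are simple linear mixtures over the basic estimate. A compact illustrative sketch of the two follows; the scoring function p, the unit weights lam_ou, and the language weights lam_lang are placeholders, not values from the paper:

```python
def context_score(q, ca, p, org_units, lam_ou):
    """Section 7.2: interpolate the candidate estimate with estimates for
    the candidate's organizational units ou, weighted by lam_ou[ou].
    p scores a query against either a candidate or a unit."""
    units = org_units[ca]                      # units ca is a member of
    mass = sum(lam_ou[ou] for ou in units)
    return ((1.0 - mass) * p(q, ca)
            + sum(lam_ou[ou] * p(q, ou) for ou in units))

def multilingual_score(q_by_lang, ca, p, lam_lang):
    """Section 7.3: p'(q|ca) = sum over languages l of lam_l * p(q_l|ca),
    with the language weights summing to one (e.g., 0.5/0.5)."""
    return sum(lam_lang[l] * p(q_l, ca) for l, q_l in q_by_lang.items())
```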
8. ADVANCED MODELS: EVALUATION

In this section we present an experimental evaluation of our advanced models.

8.1 Research Questions

Our questions follow the refinements presented in the preceding section: Does exploiting the knowledge area similarity improve effectiveness? Which of the various methods for capturing word relationships is most effective? Furthermore, is our way of bringing in contextual information useful? For which tasks? And finally, is our simple way of combining the monolingual scores sufficient for obtaining significant improvements?

8.2 Experimental setup

Given that the self-assessments are also sparse in our collection, in order to be able to measure differences between the various models we selected a subset of topics and evaluated (some of the) runs only on this subset. This set is referred to as the main topics, and consists of topics that are located at the top level of the topical hierarchy. (A main topic has subtopics, but is not a subtopic of any other topic.) This main set consists of 132 Dutch and 119 English topics. The relevance judgements were restricted to the main topic set, but were not expanded with subtopics.

8.3 Exploiting knowledge area similarity

Table 4 presents the results.

            Model 1         Model 2         Model 3
Method      MAP    MRR      MAP    MRR      MAP    MRR
Expert finding
English
BASELINE    0.296  0.454    0.339  0.509    0.221  0.333
KLDIV       0.291  0.453    0.327  0.503    0.219  0.330
PMI         0.291  0.453    0.337  0.509    0.219  0.331
LL          0.319  0.490    0.360  0.524    0.233  0.368
HDIST       0.299  0.465    0.346  0.537    0.219  0.332
Dutch
BASELINE    0.240  0.350    0.271  0.403    0.227  0.389
KLDIV       0.239  0.347    0.253  0.386    0.224  0.385
PMI         0.239  0.350    0.260  0.392    0.227  0.389
LL          0.255  0.372    0.281  0.425    0.231  0.389
HDIST       0.253  0.365    0.271  0.407    0.236  0.402
Expert profiling
English
BASELINE    0.485  0.546    0.499  0.548    0.381  0.416
KLDIV       0.510  0.564    0.513  0.558    0.381  0.416
PMI         0.486  0.546    0.495  0.542    0.407  0.451
LL          0.558  0.589    0.586  0.617    0.408  0.453
HDIST       0.507  0.567    0.512  0.563    0.386  0.420
Dutch
BASELINE    0.263  0.313    0.294  0.358    0.262  0.315
KLDIV       0.284  0.336    0.271  0.321    0.261  0.314
PMI         0.265  0.317    0.265  0.316    0.273  0.330
LL          0.312  0.351    0.330  0.377    0.284  0.331
HDIST       0.280  0.327    0.288  0.341    0.266  0.321

Table 4: Performance on the expert finding (top) and profiling (bottom) tasks, using knowledge area similarities. Runs were evaluated on the main topics set. Best scores are in boldface.

The four methods used for estimating knowledge-area similarity are KL divergence (KLDIV), pointwise mutual information (PMI), log-likelihood (LL), and distance within the topic hierarchy (HDIST).
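For reference, the LL weight from Section 7.1 (the strongest of these methods, as noted below) can be computed from raw counts as in this sketch; the clipping of p away from 0 and 1 is a guard added here, not part of the original formulation:

```python
from math import log

def _l(p, k, n):
    """l(p, k, n) = k log p + (n - k) log(1 - p), with p clipped
    away from 0 and 1 to keep the logarithms finite."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    return k * log(p) + (n - k) * log(1.0 - p)

def ll_weight(k1, k2, n1, n2):
    """Log-likelihood similarity weight w(q, q') of Section 7.1.
    k1: co-occurrences of q and q'     n1: total occurrences of q'
    k2: occurrences of q without q'    n2: topic tokens minus occurrences of q'"""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2.0 * (_l(p1, k1, n1) + _l(p2, k2, n2)
                  - _l(p, k1, n1) - _l(p, k2, n2))
```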
We managed to improve upon the baseline in all cases, but the improvement is more noticeable for the profiling task. For both tasks, the LL method performed best. The content-based approaches performed consistently better than HDIST.

8.4 Contextual information

A two-level hierarchy of organizational units (faculties and institutes) is available in the UvT Expert collection. The unit a person belongs to is used as a context for that person. First, we evaluated the models of the organizational units, using all topics (ALL) and only the main topics (MAIN). An organizational unit is considered to be relevant for a given topic (or vice versa) if at least one member of the unit selected the given topic as an expertise area. Table 5 reports on the results.

            Model 1         Model 2         Model 3
Topics      MAP    MRR      MAP    MRR      MAP    MRR
Expert finding
UK ALL      0.423  0.545    0.654  0.799    0.494  0.629
UK MAIN     0.500  0.621    0.704  0.834    0.587  0.699
NL ALL      0.439  0.560    0.672  0.826    0.480  0.630
NL MAIN     0.440  0.584    0.645  0.816    0.515  0.655
Expert profiling
UK ALL      0.240  0.640    0.306  0.778    0.223  0.616
UK MAIN     0.523  0.677    0.519  0.648    0.461  0.587
NL ALL      0.203  0.716    0.254  0.770    0.183  0.627
NL MAIN     0.332  0.576    0.380  0.624    0.332  0.549

Table 5: Evaluating the context models on organizational units.

As far as expert finding goes, given a topic, the corresponding organizational unit can be identified with high precision. However, the expert profiling task shows a different picture: the scores are low, and the task seems hard. The explanation may be that general concepts (i.e., our main topics) may belong to several organizational units.

Second, we performed another evaluation, where we combined the contextual models with the candidate models (to score candidates again). Table 6 reports on the results.

               Model 1         Model 2         Model 3
Lang.  Method  MAP    MRR      MAP    MRR      MAP    MRR
Expert finding
UK     BL      0.296  0.454    0.339  0.509    0.221  0.333
UK     CT      0.330  0.491    0.342  0.500    0.228  0.342
NL     BL      0.240  0.350    0.271  0.403    0.227  0.389
NL     CT      0.251  0.382    0.267  0.410    0.246  0.404
Expert profiling
UK     BL      0.485  0.546    0.499  0.548    0.381  0.416
UK     CT      0.562  0.620    0.508  0.558    0.440  0.486
NL     BL      0.263  0.313    0.294  0.358    0.262  0.315
NL     CT      0.330  0.384    0.317  0.387    0.294  0.345

Table 6: Performance of the context models (CT) compared to the baseline (BL). Best scores are in boldface.

We find a positive impact of the context models only for expert finding. Noticeably, for expert finding (and Model 1), it improves MAP by over 50% (for English) and over 70% (for Dutch). The poor performance on expert profiling may be due to the fact that the context models alone did not perform very well on the profiling task to begin with.

8.5 Multilingual models

In this subsection we evaluate the method for combining results across multiple languages that we described in Section 7.3. In our setting the set of languages consists of English and Dutch: L = {UK, NL}. The weights on these languages were set to be identical (λUK = λNL = 0.5). We performed experiments with various λ settings, but did not observe significant differences in performance. Table 3 reports on the multilingual results, where performance is evaluated on the full topic set.

                Expert finding                                                Expert profiling
                Model 1             Model 2             Model 3               Model 1             Model 2             Model 3
Language        %q    MAP   MRR     %q    MAP   MRR     %q    MAP   MRR       %ca   MAP   MRR     %ca   MAP   MRR     %ca   MAP   MRR
English only    97.8  0.237 0.372   98.6  0.280 0.441   98.5  0.166 0.293     100   0.199 0.387   88.7  0.281 0.525   90.9  0.169 0.329
Dutch only      61.3  0.249 0.401   62.6  0.265 0.436   62.6  0.195 0.344     91.9  0.164 0.426   90.1  0.195 0.488   91.9  0.125 0.328
Combination     99.4  0.297 0.444   99.7  0.324 0.491   99.7  0.223 0.388     100   0.241 0.445   92.1  0.313 0.564   93.2  0.224 0.411

Table 3: Performance of the combination of languages on the expert finding and profiling tasks (on candidates). Best scores for each model are in italic; absolute best scores for the expert finding and profiling tasks are in boldface.

All three models significantly improved over all measures for both tasks.
The coverage of topics and candidates for the expert finding and profiling tasks, respectively, is close to 100% in all cases. The relative improvement of the precision scores ranges from 10% to 80%. These scores demonstrate that despite its simplicity, our method for combining results over multiple languages achieves substantial improvements over the baseline.

9. CONCLUSIONS

In this paper we focused on expertise retrieval (expert finding and profiling) in a new setting of a typical knowledge-intensive organization, in which the available data is of high quality, multilingual, and covers a broad range of expertise areas. Typically, the amount of available data in such an organization (e.g., a university, a research institute, or a research lab) is limited when compared to the W3C collection that has mostly been used for the experimental evaluation of expertise retrieval so far. To examine expertise retrieval in this setting, we introduced (and released) the UvT Expert collection as a representative case of such knowledge-intensive organizations. The new collection reflects the typical properties of knowledge-intensive institutes noted above and also includes several features which are potentially useful for expertise retrieval, such as topical and organizational structure.

We evaluated how current state-of-the-art models for expert finding and profiling performed in this new setting, and then refined these models in order to try to exploit the different characteristics within the data environment (language, topicality, and organizational structure). We found that current models of expertise retrieval generalize well to this new environment; in addition, we found that refining the models to account for the differences results in significant improvements, thus making up for problems caused by data sparseness issues. Future work includes setting up manual assessments of automatically generated profiles by the employees themselves, especially in cases where the employees have not provided a profile themselves.

10. ACKNOWLEDGMENTS

Krisztian Balog was supported by the Netherlands Organisation for Scientific Research (NWO) under project number 220-80-001. Maarten de Rijke was also supported by NWO under project numbers 017.001.190, 220-80-001, 264-70-050, 354-20-005, 600.065.120, 612-13-001, 612.000.106, 612.066.302, 612.069.006, 640.001.501, 640.002.501, and by the E.U. IST programme of the 6th FP for RTD under project MultiMATCH contract IST-033104. The work of Toine Bogers and Antal van den Bosch was funded by the IOP-MMI-program of SenterNovem / The Dutch Ministry of Economic Affairs, as part of the À Propos project.

11. REFERENCES

[1] L. Azzopardi. Incorporating Context in the Language Modeling Framework for ad-hoc Information Retrieval. PhD thesis, University of Paisley, 2005.
[2] K. Balog and M. de Rijke. Finding similar experts. In This volume, 2007.
[3] K. Balog and M. de Rijke. Determining expert profiles (with an application to expert finding). In IJCAI '07: Proc. 20th Intern. Joint Conf. on Artificial Intelligence, pages 2657-2662, 2007.
[4] K. Balog, L. Azzopardi, and M. de Rijke. Formal models for expert finding in enterprise corpora. In SIGIR '06: Proc. 29th annual intern. ACM SIGIR conf. on Research and development in information retrieval, pages 43-50, 2006.
[5] I. Becerra-Fernandez. The role of artificial intelligence technologies in the implementation of people-finder knowledge management systems. In AAAI Workshop on Bringing Knowledge to Business Processes, March 2000.
[6] C. S. Campbell, P. P. Maglio, A. Cozzi, and B. Dom. Expertise identification using email communications. In CIKM '03: Proc. twelfth intern. conf. on Information and knowledge management, pages 528-531, 2003.
[7] G. Cao, J.-Y. Nie, and J. Bai. Integrating word relationships into language models. In SIGIR '05: Proc. 28th annual intern. ACM SIGIR conf. on Research and development in information retrieval, pages 298-305, 2005.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 1991.
[9] N. Craswell, D. Hawking, A. M. Vercoustre, and P. Wilkins. P@noptic expert: Searching for experts not just for documents. In Ausweb, 2001.
[10] N. Craswell, A. de Vries, and I. Soboroff. Overview of the TREC 2005 Enterprise Track. In The Fourteenth Text REtrieval Conf. Proc. (TREC 2005), 2006.
[11] T. H. Davenport and L. Prusak. Working Knowledge: How Organizations Manage What They Know. Harvard Business School Press, Boston, MA, 1998.
[12] T. Dunning. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74, 1993.
[13] E. Filatova and J. Prager. Tell me what you do and I'll tell you what you are: Learning occupation-related activities for biographies. In HLT/EMNLP, 2005.
[14] V. Lavrenko and W. B. Croft. Relevance based language models. In SIGIR '01: Proc. 24th annual intern. ACM SIGIR conf. on Research and development in information retrieval, pages 120-127, 2001.
[15] V. Lavrenko, M. Choquette, and W. B. Croft. Cross-lingual relevance models. In SIGIR '02: Proc. 25th annual intern. ACM SIGIR conf. on Research and development in information retrieval, pages 175-182, 2002.
[16] C. Macdonald and I. Ounis. Voting for candidates: adapting data fusion techniques for an expert search task. In CIKM '06: Proc. 15th ACM intern. conf. on Information and knowledge management, pages 387-396, 2006.
[17] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, 1999.
[18] A. Mockus and J. D. Herbsleb. Expertise browser: a quantitative approach to identifying expertise. In ICSE '02: Proc. 24th Intern. Conf. on Software Engineering, pages 503-512, 2002.
[19] D. Petkova and W. B. Croft. Hierarchical language models for expert finding in enterprise corpora. In Proc. ICTAI 2006, pages 599-608, 2006.
[20] I. Soboroff, A. de Vries, and N. Craswell. Overview of the TREC 2006 Enterprise Track. In TREC 2006 Working Notes, 2006.
[21] T. Tao, X. Wang, Q. Mei, and C. Zhai. Language model information retrieval with document expansion. In HLT-NAACL 2006, 2006.
[22] TREC. Enterprise track, 2005. URL: http://www.ins.cwi.nl/projects/trec-ent/wiki/.
[23] G. van Noord. TextCat Language Guesser. URL: http://www.let.rug.nl/~vannoord/TextCat/.
[24] W3C. The W3C test collection, 2005. URL: http://research.microsoft.com/users/nickcr/w3c-summary.html.
Broad Expertise Retrieval in Sparse Data Environments ABSTRACT Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings. 1. INTRODUCTION An organization's intranet provides a means for exchanging information between employees and for facilitating employee collaborations. To efficiently and effectively achieve this, it is necessary to provide search facilities that enable employees not only to access documents, but also to identify expert colleagues. At the TREC Enterprise Track [22] the need to study and understand expertise retrieval has been recognized through the introduction of Expert Finding tasks. The goal of expert finding is to identify a list of people who are knowledgeable about a given topic. This task is usually addressed by uncovering associations between people and topics [10]; commonly, a co-occurrence of the name of a person with topics in the same context is assumed to be evidence of expertise. An alternative task, which using the same idea of people-topic associations, is expertprofiling, where the task is to return a list of topics that a person is knowledgeable about [3]. The launch of the Expert Finding task at TREC has generated a lot of interest in expertise retrieval, with rapid progress being made in terms of modeling, algorithms, and evaluation aspects. However, nearly all of the expert finding or profiling work performed has been validated experimentally using the W3C collection [24] from the Enterprise Track. While this collection is currently the only publicly available test collection for expertise retrieval tasks, it only represents one type of intranet. With only one test collection it is not possible to generalize conclusions to other realistic settings. In this paper we focus on expertise retrieval in a realistic setting that differs from the W3C setting--one in which relatively small amounts of clean, multilingual data are available, that cover a broad range of expertise areas, as can be found on the intranets of universities and other knowledge-intensive organizations. Typically, this setting features several additional types of structure: topical structure (e.g., topic hierarchies as employed by the organization), organizational structure (faculty, department, ...), as well as multiple types of documents (research and course descriptions, publications, and academic homepages). 
This setting is quite different from the W3C setting in ways that might impact upon the performance of expertise retrieval tasks. We focus on a number of research questions in this paper: Does the relatively small amount of data available on an intranet affect the quality of the topic-person associations that lie at the heart of expertise retrieval algorithms? How do state-of-the-art algorithms developed on the W3C data set perform in the alternative scenario of the type described above? More generally, do the lessons from the Expert Finding task at TREC carry over to this setting? How does the inclusion or exclusion of different documents affect expertise retrieval tasks? In addition to, how can the topical and organizational structure be used for retrieval purposes? To answer our research questions, we first present a set of baseline approaches, based on generative language modeling, aimed at finding associations between topics and people. This allows us to formulate the expert finding and expert profiling tasks in a uniform way, and has the added benefit of allowing us to understand the relations between the two tasks. For our experimental evaluation, we introduce a new data set (the UvT Expert Collection) which is representative of the type of intranet that we described above. Our collection is based on publicly available data, crawled from the website of Tilburg University (UvT). This type of data is particularly interesting, since (1) it is clean, heterogeneous, structured, and focused, but comprises a limited number of documents; (2) contains information on the organizational hierarchy; (3) it is bilingual (English and Dutch); and (4) the list of expertise areas of an individual are provided by the employees themselves. Using the UvT Expert collection, we conduct two sets of experiments. The first is aimed at determining the effectiveness of baseline expertise finding and profiling methods in this new setting. A second group of experiments is aimed at extensions of the baseline methods that exploit characteristic features of the UvT Expert Collection; specifically, we propose and evaluate refined expert finding and profiling methods that incorporate topicality and organizational structure. Apart from the research questions and data set that we contribute, our main contributions are as follows. The baseline models developed for expertise finding perform well on the new data set. While on the W3C setting the expert finding task appears to be more difficult than profiling, for the UvT data the opposite is the case. We find that profiling on the UvT data set is considerably more difficult than on the W3C set, which we believe is due to the large (but realistic) number of topical areas that we used for profiling: about 1,500 for the UvT set, versus 50 in the W3C case. Taking the similarity between topics into account can significantly improve retrieval performance. The best performing similarity measures are content-based, therefore they can be applied on the W3C (and other) settings as well. Finally, we demonstrate that the organizational structure can be exploited in the form of a context model, improving MAP scores for certain models by up to 70%. The remainder of this paper is organized as follows. In the next section we review related work. Then, in Section 3 we provide detailed descriptions of the expertise retrieval tasks that we address in this paper: expert finding and expert profiling. 
In Section 4 we present our baseline models, of which the performance is then assessed in Section 6 using the UvT data set that we introduce in Section 5. Advanced models exploiting specific features of our data are presented in Section 7 and evaluated in Section 8. We formulate our conclusions in Section 9. 2. RELATED WORK Initial approaches to expertise finding often employed databases containing information on the skills and knowledge of each individual in the organization [11]. Most of these tools (usually called yellow pages or people-finding systems) rely on people to self-assess their skills against a predefined set of keywords. For updating profiles in these systems in an automatic fashion there is a need for intelligent technologies [5]. More recent approaches use specific document sets (such as email [6] or software [18]) to find expertise. In contrast with focusing on particular document types, there is also an increased interest in the development of systems that index and mine published intranet documents as sources of evidence for expertise. One such published approach is the P@noptic system [9], which builds a representation of each person by concatenating all documents associated with that person--this is similar to Model 1 of Balog et al. [4], who formalize and compare two methods. Balog et al.'s Model 1 directly models the knowledge of an expert from associated documents, while their Model 2 first locates documents on the topic and then finds the associated experts. In the reported experiments the second method performs significantly better when there are sufficiently many associated documents per candidate. Most systems that took part in the 2005 and 2006 editions of the Expert Finding task at TREC implemented (variations on) one of these two models; see [10, 20]. Macdonald and Ounis [16] propose a different approach for ranking candidate expertise with respect to a topic based on data fusion techniques, without using collectionspecific heuristics; they find that applying field-based weighting models improves the ranking of candidates. Petkova and Croft [19] propose yet another approach, based on a combination of the above Model 1 and 2, explicitly modeling topics. Turning to other expert retrieval tasks that can also be addressed using topic--people associations, Balog and de Rijke [3] addressed the task of determining topical expert profiles. While their methods proved to be efficient on the W3C corpus, they require an amount of data that may not be available in the typical knowledge-intensive organization. Balog and de Rijke [2] study the related task of finding experts that are similar to a small set of experts given as input. As an aside, creating a textual "summary" of a person shows some similarities to biography finding, which has received a considerable amount of attention recently; see e.g., [13]. We use generative language modeling to find associations between topics and people. In our modeling of expert finding and profiling we collect evidence for expertise from multiple sources, in a heterogeneous collection, and integrate it with the co-occurrence of candidates' names and query terms--the language modeling setting allows us to do this in a transparent manner. Our modeling proceeds in two steps. 
In the first step, we consider three baseline models, two taken from [4] (the Models 1 and 2 mentioned above), and one a refined version of a model introduced in [3] (which we refer to as Model 3 below); this third model is also similar to the model described by Petkova and Croft [19]. The models we consider in our second round of experiments are mixture models similar to contextual language models [1] and to the expanded documents of Tao et al. [21]; however, the features that we use for definining our expansions--including topical structure and organizational structure--have not been used in this way before. 3. TASKS In the expertise retrieval scenario that we envisage, users seeking expertise within an organization have access to an interface that combines a search box (where they can search for experts or topics) with navigational structures (of experts and of topics) that allows them to click their way to an expert page (providing the profile of a person) or a topic page (providing a list of experts on the topic). To "feed" the above interface, we face two expertise retrieval tasks, expert finding and expert profiling, that we first define and then formalize using generative language models. In order to model either task, the probability of the query topic being associated to a candidate expert plays a key role in the final estimates for searching and profiling. By using language models, both the candidates and the query are characterized by distributions of terms in the vocabulary (used in the documents made available by the organization whose expertise retrieval needs we are addressing). 3.1 Expert finding Expert finding involves the task of finding the right person with the appropriate skills and knowledge: Who are the experts on topic X? . E.g., an employee wants to ascertain who worked on a particular project to find out why particular decisions were made without having to trawl through documentation (if there is any). Or, they may be in need a trained specialist for consultancy on a specific problem. Within an organization there are usually many possible candidates who could be experts for given topic. We can state this prob lem as follows: What is the probability of a candidate ca being an expert given the query topic q? That is, we determine p (caIq), and rank candidates ca according to this probability. The candidates with the highest probability given the query are deemed the most likely experts for that topic. The challenge is how to estimate this probability accurately. Since the query is likely to consist of only a few terms to describe the expertise required, we should be able to obtain a more accurate estimate by invoking Bayes' Theorem, and estimating: where p (ca) is the probability of a candidate and p (q) is the probability of a query. Since p (q) is a constant, it can be ignored for ranking purposes. Thus, the probability of a candidate ca being an expert given the query q is proportional to the probability of a query given the candidate p (qIca), weighted by the a priori belief p (ca) that candidate ca is an expert. In this paper our main focus is on estimating the probability of a query given the candidate p (qIca), because this probability captures the extent to which the candidate knows about the query topic. Whereas the candidate priors are generally assumed to be uniform--and thus will not influence the ranking--it has been demonstrated that a sensible choice of priors may improve the performance [20]. 
3.2 Expert profiling While the task of expert searching was concerned with finding experts given a particular topic, the task of expert profiling seeks to answer a related question: What topics does a candidate know about? Essentially, this turns the questions of expert finding around. The profiling of an individual candidate involves the identification of areas of skills and knowledge that they have expertise about and an evaluation of the level of proficiency in each of these areas. This is the candidate's topical profile. Generally, topical profiles within organizations consist of tabular structures which explicitly catalogue the skills and knowledge of each individual in the organization. However, such practice is limited by the resources available for defining, creating, maintaining, and updating these profiles over time. By focusing on automatic methods which draw upon the available evidence within the document repositories of an organization, our aim is to reduce the human effort associated with the maintenance of topical profiles1. A topical profile of a candidate, then, is defined as a vector where each element i of the vector corresponds to the candidate ca's expertise on a given topic ki, (i.e., s (ca, ki)). Each topic ki defines a particular knowledge area or skill that the organization uses to define the candidate's topical profile. Thus, it is assumed that a list of topics, {k1,..., kn}, where n is the number of pre-defined topics, is given: 1Context and evidence are needed to help users of expertise finding systems to decide whom to contact when seeking expertise in a particular area. Examples of such context are: Who does she work with? What are her contact details? Is she well-connected, just in case she is not able to help us herself? What is her role in the organization? Who is her superior? Collaborators, and affiliations, etc. are all part of the candidate's social profile, and can serve as a background against which the system's recommendations should be interpreted. In this paper we only address the problem of determining topical profiles, and leave social profiling to further work. We state the problem of quantifying the competence of a person on a certain knowledge area as follows: What is the probability of a knowledge area (ki) being part of the candidate's (expertise) profile? where s (ca, ki) is defined by p (kiIca). Our task, then, is to estimate p (kiIca), which is equivalent to the problem of obtaining p (qIca), where the topic ki is represented as a query topic q, i.e., a sequence of keywords representing the expertise required. Both the expert finding and profiling tasks rely on the accurate estimation of p (qIca). The only difference derives from the prior probability that a person is an expert (p (ca)), which can be incorporated into the expert finding task. This prior does not apply to the profiling task since the candidate (individual) is fixed. 4. BASELINE MODELS In this section we describe our baseline models for estimating p (qIca), i.e., associations between topics and people. Both expert finding and expert profiling boil down to this estimation. We employ three models for calculating this probability. 4.1 From topics to candidates Using Candidate Models: Model 1 Model 1 [4] defines the probability of a query given a candidate (p (qIca)) using standard language modeling techniques, based on a multinomial unigram language model. 
For each candidate ca, a candidate language model θca is inferred such that the probability of a term given θca is nonzero for all terms, i.e., p (tIθca)> 0. From the candidate model the query is generated with the following probability: where each term t in the query q is sampled identically and independently, and n (t, q) is the number of times t occurs in q. The candidate language model is inferred as follows: (1) an empirical model p (tIca) is computed; (2) it is smoothed with background probabilities. Using the associations between a candidate and a document, the probability p (tIca) can be approximated by: where p (dIca) is the probability that candidate ca generates a supporting document d, and p (tId) is the probability of a term t occurring in the document d. We use the maximum-likelihood estimate of a term, that is, the normalised frequency of the term t in document d. The strength of the association between document d and candidate ca expressed by p (dIca) reflects the degree to which the candidates expertise is described using this document. The estimation of this probability is presented later, in Section 4.2. The candidate model is then constructed as a linear interpolation of p (tIca) and the background model p (t) to ensure there are no zero probabilities, which results in the final estimation: d Model 1 amasses all the term information from all the documents associated with the candidate, and uses this to represent that candidate. This model is used to predict how likely a candidate would produce a query q. This can can be intuitively interpreted as the probability of this candidate talking about the query topic, where we assume that this is indicative of their expertise. Using Document Models: Model 2 Model 2 [4] takes a different approach. Here, the process is broken into two parts. Given a candidate ca, (1) a document that is associated with a candidate is selected with probability p (dIca), and (2) from this document a query q is generated with probability p (qId). Then the sum over all documents is taken to obtain p (qIca), such that: d The probability of a query given a document is estimated by inferring a document language model θd for each document d in a similar manner as the candidate model was inferred: where p (tId) is the probability of the term in the document. The probability of a query given the document model is: The final estimate of p (qIca) is obtained by substituting p (qId) for p (qIθd) into Eq. 5 (see [4] for full details). Conceptually, Model 2 differs from Model 1 because the candidate is not directly modeled. Instead, the document acts like a "hidden" variable in the process which separates the query from the candidate. This process is akin to how a user may search for candidates with a standard search engine: initially by finding the documents which are relevant, and then seeing who is associated with that document. By examining a number of documents the user can obtain an idea of which candidates are more likely to discuss the topic q. Using Topic Models: Model 3 We introduce a third model, Model 3. Instead of attempting to model the query generation process via candidate or document models, we represent the query as a topic language model and directly estimate the probability of the candidate p (caIq). This approach is similar to the model presented in [3, 19]. As with the previous models, a language model is inferred, but this time for the query. We adapt the work of Lavrenko and Croft [14] to estimate a topic model from the query. 
The procedure is as follows. Given a collection of documents and a query topic q, it is assumed that there exists an unknown topic model θk that assigns probabilities p (tIθk) to the term occurrences in the topic documents. Both the query and the documents are samples from θk (as opposed to the previous approaches, where a query is assumed to be sampled from a specific document or candidate model). The main task is to estimate p (tIθk), the probability of a term given the topic model. Since the query q is very sparse, and as there are no examples of documents on the topic, this distribution needs to be approximated. Lavrenko and Croft [14] suggest a reasonable way of obtaining such an approximation, by assuming that p (tIθk) can be approximated by the probability of term t given the query q. We can then estimate p (tIq) using the joint probability of observing the term t together with the query terms, ql,..., qm, and dividing by the joint probability of the query terms: where p (ql,..., qm) = Et0ET p (t', ql,..., qm), and T is the entire vocabulary of terms. In order to estimate the joint probability p (t, ql,..., qm), we follow [14, 15] and assume t and ql,..., qm are mutually independent, once we pick a source distribution from the set of underlying source distributions U. If we choose U to be a set of document models. then to construct this set, the query q would be issued against the collection, and the top n returned are assumed to be relevant to the topic, and thus treated as samples from the topic model. (Note that candidate models could be used instead.) With the document models forming U, the joint probability of term and query becomes: Here, p (d) denotes the prior distribution over the set U, which reflects the relevance of the document to the topic. We assume that p (d) is uniform across U. In order to rank candidates according to the topic model defined, we use the Kullback-Leibler divergence metric (KL, [8]) to measure the difference between the candidate models and the topic model: Candidates with a smaller divergence from the topic model are considered to be more likely experts on that topic. The candidate model θca is defined in Eq. 4. By using KL divergence instead of the probability of a candidate given the topic model p (caIθk), we avoid normalization problems. 4.2 Document-candidate associations For our models we need to be able to estimate the probability p (dIca), which expresses the extent to which a document d characterizes the candidate ca. In [4], two methods are presented for estimating this probability, based on the number of person names recognized in a document. However, in our (intranet) setting it is reasonable to assume that authors of documents can unambiguously be identified (e.g., as the author of an article, the teacher assigned to a course, the owner of a web page, etc.) Hence, we set p (dIca) to be 1 if candidate ca is author of document d, otherwise the probability is 0. In Section 6 we describe how authorship can be determined on different types of documents within the collection. 5. THE UVT EXPERT COLLECTION The UvT Expert collection used in the experiments in this paper fits the scenario outlined in Section 3. The collection is based on the Webwijs ("Webwise") system developed at Tilburg University (UvT) in the Netherlands. 
Webwijs (http://www.uvt.nl/ webwijs /) is a publicly accessible database of UvT employees who are involved in research or teaching; currently, Webwijs contains information about 1168 experts, each of whom has a page with contact information and, if made available by the expert, a research description and publications list. In addition, each expert can select expertise areas from a list of 1491 topics and is encouraged to suggest new topics that need to be approved by the Webwijs editor. Each topic has a separate page that shows all experts associated with that topic and, if available, a list of related topics. Webwijs is available in Dutch and English, and this bilinguality has been preserved in the collection. Every Dutch Webwijs page has an English translation. Not all Dutch topics have an English translation, but the reverse is true: the 981 English topics all have a Dutch equivalent. About 42% of the experts teach courses at Tilburg University; these courses were also crawled and included in the profile. In addition, about 27% of the experts link to their academic homepage from their Webwijs page. These home pages were crawled and added to the collection. (This means that if experts put the full-text versions of their publications on their academic homepage, these were also available for indexing.) We also obtained 1880 full-text versions of publications from the UvT institutional repository and = Dutch English no. of experts 1168 1168 no. of experts with ≥ 1 topic 743 727 no. of topics 1491 981 no. of expert-topic pairs 4318 3251 avg. no. of topics/expert 5.8 5.9 max. no. of topics/expert (no. of experts) 60 (1) 35 (1) min. no. of topics/expert (no. of experts) 1 (74) 1 (106) avg. no. of experts/topic 2.9 3.3 max. no. of experts/topic (no. of topics) 30 (1) 30 (1) min. no. of experts/topic (no. of topics) 1 (615) 1 (346) no. of experts with HP 318 318 no. of experts with CD 318 318 avg. no. of CDs per teaching expert 3.5 3.5 no. of experts with RD 329 313 no. of experts with PUB 734 734 avg. no. of PUBs per expert 27.0 27.0 avg. no. of PUB citations per expert 25.2 25.2 avg. no. of full-text PUBs per expert 1.8 1.8 Table 2: Descriptive statistics of the Dutch and English versions of the UvT Expert collection. converted them to plain text. We ran the TextCat [23] language identifier to classify the language of the home pages and the fulltext publications. We restricted ourselves to pages where the classifier was confident about the language used on the page. This resulted in four document types: research descriptions (RD), course descriptions (CD), publications (PUB; full-text and citationonly versions), and academic homepages (HP). Everything was bundled into the UvT Expert collection which is available at http: / / ilk.uvt.nl / uvt-expert-collection /. The UvT Expert collection was extracted from a different organizational setting than the W3C collection and differs from it in a number of ways. The UvT setting is one with relatively small amounts of multilingual data. Document-author associations are clear and the data is structured and clean. The collection covers a broad range of expertise areas, as one can typically find on intranets of universities and other knowledge-intensive institutes. Additionally, our university setting features several types of structure (topical and organizational), as well as multiple document types. 
Another important difference between the two data sets is that the expertise areas in the UvT Expert collection are self-selected instead of being based on group membership or assignments by others. Size is another dimension along which the W3C and UvT Expert collections differ: the latter is the smaller of the two. Also realistic are the large differences in the amount of information available for each expert. Utilizing Webwijs is voluntary; 425 Dutch experts did not select any topics at all. This leaves us with 743 Dutch and 727 English usable expert profiles. Table 2 provides descriptive statistics for the UvT Expert collection. Universities tend to have a hierarchical structure that goes from the faculty level, to departments, research groups, down to the individual researchers. In the UvT Expert collection we have information about the affiliations of researchers with faculties and institutes, providing us with a two-level organizational hierarchy. Tilburg University has 22 organizational units at the faculty level (including the university office and several research institutes) and 71 departments, which amounts to 3.2 departments per faculty. As to the topical hierarchy used by Webwijs, 131 of the 1491 topics are top nodes in the hierarchy. This hierarchy has an average topic chain length of 2.65 and a maximum length of 7 topics. 6. EVALUATION Below, we evaluate Section 4's models for expert finding and profiling onthe UvT Expert collection. We detail our research questions and experimental setup, and then present our results. 6.1 Research Questions We address the following research questions. Both expert finding and profiling rely on the estimations of p (q | ca). The question is how the models compare on the different tasks, and in the setting of the UvT Expert collection. In [4], Model 2 outperformed Model 1 on the W3C collection. How do they compare on our data set? And how does Model 3 compare to Model 1? What about performance differences between the two languages in our test collection? 6.2 Experimental Setup The output of our models was evaluated against the self-assigned topic labels, which were treated as relevance judgements. Results were evaluated separately for English and Dutch. For English we only used topics for which the Dutch translation was available; for Dutch all topics were considered. The results were averaged for the queries in the intersection of relevance judgements and results; missing queries do not contribute a value of 0 to the scores. We use standard information retrieval measures, such as Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). We also report the percentage of topics (% q) and candidates (% ca) covered, for the expert finding and profiling tasks, respectively. 6.3 Results Table 1 shows the performance of Model 1, 2, and 3 on the expert finding and profiling tasks. The rows of the table correspond to the various document types (RD, CD, PUB, and HP) and to their combinations. RD+CD+PUB+HP is equivalent to the full collection and will be referred as the BASELINE of our experiments. Looking at Table 1 we see that Model 2 performs the best across the board. However, when the data is clean and very focused (RD), Model 3 outperforms it in a number of cases. Model 1 has the best coverage of candidates (% ca) and topics (% q). The various document types differ in their characteristics and how they improve the finding and profiling tasks. 
Expert profiling benefits much from the clean data present in the RD and CD document types, while the publications contribute the most to the expert finding task. Adding the homepages does not prove to be particularly useful. When we compare the results across languages, we find that the coverage of English topics (% q) is higher than of the Dutch ones for expert finding. Apart from that, the scores fall in the same range for both languages. For the profiling task the coverage of the candidates (% ca) is very similar for both languages. However, the performance is substantially better for the English topics. While it is hard to compare scores across collections, we conclude with a brief comparison of the absolute scores in Table 1 to those reported in [3, 4] on the W3C test set (2005 edition). For expert finding the MAP scores for Model 2 reported here are about 50% higher than the corresponding figures in [4], while our MRR scores are slightly below those in [4]. For expert profiling, the differences are far more dramatic: the MAP scores for Model 2 reported here are around 50% below the scores in [3], while the (best) MRR scores are about the same as those in [3]. The cause for the latter differences seems to reside in the number of knowledge areas considered here--approx. 30 times more than in the W3C setting. 7. ADVANCED MODELS Now that we have developed and assessed basic language modeling techniques for expertise retrieval, we turn to refined models that exploit special features of our test collection. 7.1 Exploiting knowledge area similarity One way to improve the scoring of a query given a candidate is to consider what other requests the candidate would satisfy and use them as further evidence to support the original query, proportional Table 1: Performance of the models on the expert finding and profiling tasks, using different document types and their combinations. % q is the number of topics covered (applies to the expert finding task),% ca is the number of candidates covered (applies to the expert profiling task). The top and bottom blocks correspond to English and Dutch respectively. The best scores are in boldface. to how related the other requests are to the original query. This can be modeled by interpolating between the p (q | ca) and the further supporting evidence from all similar requests q', as follows: where p (q | q') represents the similarity between the two topics q and q'. To be able to work with similarity methods that are not necessarily probabilities, we set p (q | q') = w (9,9) γ, where - y is a normalizing constant, such that - y = E9 w (q", q'). We consider four methods for calculating the similarity score between two topics. Three approaches are strictly content-based, and establish similarity by examining co-occurrence patterns of topics within the collection, while the last approach exploits the hierarchical structure of topical areas that may be present within an organization (see [7] for further examples of integrating word relationships into language models). The Kullback-Leibler (KL) divergence metric defined in Eq. 8 provides a measure of how different or similar two probability distributions are. A topic model is inferred for q and q' using the method presented in Section 4.1 to describe the query across the entire vocabulary. Since a lower KL score means the queries are more similar, we let w (q, q') = max (KL (θ9 | | ·) − KL (θ9 | | θ9)). 
Pointwise Mutual Information (PMI, [17]) is a measure of association used in information theory to determine the extent of independence between variables. The dependence between two queries is reflected by the SI (q, q') score, where scores greater than zero indicate that it is likely that there is a dependence, which we take to mean that the queries are likely to be similar: We estimate the probability of a topic p (q) using the number of documents relevant to query q within the collection. The joint probability p (q, q') is estimated similarly, by using the concatenation of q and q' as a query. To obtain p (q | q'), we then set w (q, q') = SI (q, q') when SI (q, q')> 0 otherwise w (q, q') = 0, because we are only interested in including queries that are similar. The log-likelihood statistic provides another measure of dependence, which is more reliable than the pointwise mutual information measure [17]. Let k1 be the number of co-occurrences of q and q', k2 the number of occurrences of q not co-occurring with q', n1 the total number of occurrences of q', and n2 the total number of topic tokens minus the number of occurrences of q'. Then, let where $(p, n, k) = k log p + (n − k) log (1 − p). The higher it score indicate that queries are also likely to be similar, thus we set w (q, q') = U (q, q'). Finally, we also estimate the similarity of two topics based on their distance within the topic hierarchy. The topic hierarchy is viewed as a directed graph, and for all topic-pairs the shortest path SP (q, q') is calculated. We set the similarity score to be the reciprocal of the shortest path: w (q, q') = 1/SP (q, q'). 7.2 Contextual information Given the hierarchy of an organization, the units to which a person belong are regarded as a context so as to compensate for data sparseness. We model it as follows: where OU (ca) is the set of organizational units of which candidate ca is a member of, and p (q | o) expresses the strength of the association between query q and the unit ou. The latter probability can be estimated using either of the three basic models, by simply replacing ca with ou in the corresponding equations. An organizational unit is associated with all the documents that its members have authored. That is, p (d | ou) = maxcaEou p (d | ca). 7.3 A simple multilingual model For knowledge institutes in Europe, academic or otherwise, a multilingual (or at least bilingual) setting is typical. The following model builds on a kind of independence assumption: there is no spill-over of expertise/profiles across language boundaries. While a lEL yl · p (ql | ca), where L is the set of languages used in the collection, ql is the translation of the query q to language l, and yl is a language specific smoothing parameter, such that ElEL yl = 1. 8. ADVANCED MODELS: EVALUATION In this section we present an experimental evaluation of our advanced models. Table 3: Performance of the combination of languages on the expert finding and profiling tasks (on candidates). Best scores for each model are in italic, absolute best scores for the expert finding and profiling tasks are in boldface. Table 4: Performance on the expert finding (top) and profiling (bottom) tasks, using knowledge area similarities. Runs were evaluated on the main topics set. Best scores are in boldface. 8.1 Research Questions Our questions follow the refinements presented in the preceding section: Does exploiting the knowledge area similarity improve effectiveness? 
Which of the various methods for capturing word relationships is most effective? Furthermore, is our way of bringing in contextual information useful? For which tasks? And finally, is our simple way of combining the monolingual scores sufficient for obtaining significant improvements?

8.2 Experimental setup
Given that the self-assessments are also sparse in our collection, in order to be able to measure differences between the various models, we selected a subset of topics, and evaluated (some of the) runs only on this subset. This set is referred to as main topics, and consists of topics that are located at the top level of the topical hierarchy. (A main topic has subtopics, but is not a subtopic of any other topic.) This main set consists of 132 Dutch and 119 English topics. The relevance judgements were restricted to the main topic set, but were not expanded with subtopics.

8.3 Exploiting knowledge area similarity
Table 4 presents the results. The four methods used for estimating knowledge-area similarity are KL divergence (KLDIV), pointwise mutual information (PMI), log-likelihood (LL), and distance within the topic hierarchy (HDIST). We managed to improve upon the baseline in all cases, but the improvement is more noticeable for the profiling task. For both tasks, the LL method performed best. The content-based approaches performed consistently better than HDIST.

8.4 Contextual information
A two-level hierarchy of organizational units (faculties and institutes) is available in the UvT Expert collection. The unit a person belongs to is used as a context for that person. First, we evaluated the models of the organizational units, using all topics (ALL) and only the main topics (MAIN). An organizational unit is considered to be relevant for a given topic (or vice versa) if at least one member of the unit selected the given topic as an expertise area. Table 5 reports on the results.

Table 5: Evaluating the context models on organizational units.

As far as expert finding goes, given a topic, the corresponding organizational unit can be identified with high precision. However, the expert profiling task shows a different picture: the scores are low, and the task seems hard. The explanation may be that general concepts (i.e., our main topics) may belong to several organizational units. Second, we performed another evaluation, where we combined the contextual models with the candidate models (to score candidates again). Table 6 reports on the results.

Table 6: Performance of the context models (CT) compared to the baseline (BL). Best scores are in boldface.

We find a positive impact of the context models only for expert finding. Notably, for expert finding (and Model 1), it improves MAP by over 50% (for English) and over 70% (for Dutch). The poor performance on expert profiling may be due to the fact that context models alone did not perform very well on the profiling task to begin with.

8.5 Multilingual models
In this subsection we evaluate the method for combining results across multiple languages that we described in Section 7.3. In our setting the set of languages consists of English and Dutch: L = {UK, NL}. The weights on these languages were set to be identical (λUK = λNL = 0.5). We performed experiments with various λ settings, but did not observe significant differences in performance. Table 3 reports on the multilingual results, where performance is evaluated on the full topic set. All three models significantly improved over all measures for both tasks.
The coverage of topics and candidates for the expert finding and profiling tasks, respectively, is close to 100% in all cases. The relative improvement of the precision scores ranges from 10% to 80%. These scores demonstrate that despite its simplicity, our method for combining results over multiple languages achieves substantial improvements over the baseline.

9. CONCLUSIONS
In this paper we focused on expertise retrieval (expert finding and profiling) in a new setting of a typical knowledge-intensive organization in which the available data is of high quality, multilingual, and covers a broad range of expertise areas. Typically, the amount of available data in such an organization (e.g., a university, a research institute, or a research lab) is limited when compared to the W3C collection that has mostly been used for the experimental evaluation of expertise retrieval so far. To examine expertise retrieval in this setting, we introduced (and released) the UvT Expert collection as a representative case of such knowledge-intensive organizations. The new collection reflects the typical properties of knowledge-intensive institutes noted above and also includes several features that are potentially useful for expertise retrieval, such as topical and organizational structure. We evaluated how current state-of-the-art models for expert finding and profiling performed in this new setting and then refined these models in order to exploit the different characteristics of the data environment (language, topicality, and organizational structure). We found that current models of expertise retrieval generalize well to this new environment; in addition we found that refining the models to account for the differences results in significant improvements, thus making up for problems caused by data sparseness issues. Future work includes setting up manual assessments of automatically generated profiles by the employees themselves, especially in cases where the employees have not provided a profile themselves.
Vocabulary Independent Spoken Term Detection

Jonathan Mamou, IBM Haifa Research Labs, Haifa 31905, Israel, mamou@il.ibm.com
Bhuvana Ramabhadran, Olivier Siohan, IBM T. J. Watson Research Center, Yorktown Heights, N.Y. 10598, USA, {bhuvana,siohan}@us.ibm.com

ABSTRACT
We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems also provide phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purposes. The value of the proposed method is demonstrated by the relatively high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1].

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval

General Terms
Algorithms

1. INTRODUCTION
The rapidly increasing amount of spoken data calls for solutions to index and search this data. The classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool. In the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word transcripts. Some of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12]. These tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals. One of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts. While the accuracy of automatic speech recognition (ASR) systems depends on the scenario and environment, state-of-the-art systems achieved better than 90% accuracy in transcription of such data. In 2000, Garofolo et al. concluded that spoken document retrieval is a solved problem [12]. However, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results. OOV terms are words missing from the ASR system vocabulary; they are replaced in the output transcript by alternatives that are probable, given the recognition acoustic model and the language model. It has been experimentally observed that over 10% of user queries can contain OOV terms [16], as queries often relate to named entities that typically have poor coverage in the ASR vocabulary. The effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. [28]. In many applications the OOV rate may get worse over time unless the recognizer's vocabulary is periodically updated.
Another approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. The retrieval is based on searching for the sequence of phones representing the query in the phonetic transcripts. The main drawback of this approach is the inherent high error rate of the transcripts. Therefore, such an approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system. A solution would be to combine the two different approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, the information is retrieved from the word index for IV terms and from the phonetic index for OOV terms. We would also like to be able to process hybrid queries, i.e., queries that include both IV and OOV terms. Consequently, we need to merge pieces of information retrieved from the word index and the phonetic index. Proximity information on the occurrences of the query terms is required for phrase search and for proximity-based ranking. In classical IR, the index stores for each occurrence of a term its offset. Therefore, we cannot merge posting lists retrieved from the phonetic index with those retrieved from the word index, since the offsets of the occurrences retrieved from the two different indices are not comparable. The only element of comparison between phonetic and word transcripts is the timestamp. No previous work combining the word and phonetic approaches has been done on phrase search. We present a novel scheme for information retrieval that consists of storing, during the indexing process, for each unit of indexing (phone or word) its timestamp. We search queries by merging the information retrieved from the two different indices, word index and phonetic index, according to the timestamps of the query terms. We analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1].

The paper is organized as follows. We describe the audio processing in Section 2. The indexing and retrieval methods are presented in Section 3. Experimental setup and results are given in Section 4. In Section 5, we give an overview of related work. Finally, we conclude in Section 6.

2. AUTOMATIC SPEECH RECOGNITION SYSTEM
We use an ASR system for transcribing speech data. It works in speaker-independent mode. For best recognition results, a speaker-independent acoustic model and a language model are trained in advance on data with similar characteristics. Typically, ASR generates lattices that can be considered as directed acyclic graphs. Each vertex in a lattice is associated with a timestamp and each edge (u, v) is labeled with a word or phone hypothesis and its prior probability, which is the probability of the signal delimited by the timestamps of the vertices u and v, given the hypothesis. The 1-best path transcript is obtained from the lattice using dynamic programming techniques. Mangu et al. [18] and Hakkani-Tur et al. [13] propose a compact representation of a word lattice called a word confusion network (WCN). Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of a WCN is that it also provides an alignment for all of the words in the lattice. As explained in [13], the three main steps for building a WCN from a word lattice are as follows:

1. Compute the posterior probabilities for all edges in the word lattice.
2. Extract a path from the word lattice (which can be the 1-best, the longest or any random path), and call it the pivot path of the alignment.

3. Traverse the word lattice, and align all the transitions with the pivot, merging the transitions that correspond to the same word (or label) and occur in the same time interval by summing their posterior probabilities.

The 1-best path of a WCN is obtained from the path containing the best hypotheses. As stated in [18], although WCNs are more compact than word lattices, in general the 1-best path obtained from a WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. Typical structures of a lattice and a WCN are given in Figure 1.

Figure 1: Typical structures of a lattice and a WCN.

3. RETRIEVAL MODEL
The main problem with retrieving information from spoken data is the low accuracy of the transcription, particularly on terms of interest such as named entities and content words. Generally, the accuracy of a word transcript is characterized by its word error rate (WER). There are three kinds of errors that can occur in a transcript: substitution of a term that is part of the speech by another term, deletion of a spoken term that is part of the speech, and insertion of a term that is not part of the speech. Substitutions and deletions reflect the fact that an occurrence of a term in the speech signal is not recognized. These misses reduce the recall of the search. Substitutions and insertions reflect the fact that a term that is not part of the speech signal appears in the transcript. These errors reduce the precision of the search. Search recall can be enhanced by expanding the transcript with extra words. These words can be taken from the other alternatives provided by the WCN; these alternatives may have been spoken but were not the top choice of the ASR. Such an expansion tends to correct the substitutions and the deletions and consequently might improve recall, but will probably reduce precision. Using an appropriate ranking model, we can avoid the decrease in precision. Mamou et al. [17] have shown the enhancement in recall and MAP obtained by searching over WCNs instead of considering only the 1-best path word transcript, in the context of spoken document retrieval. We have adapted this model of IV search to term detection. In word transcripts, OOV terms are deleted or substituted. Therefore, the usage of phonetic transcripts is more desirable. However, due to their low accuracy, we have preferred to use only the 1-best path extracted from the phonetic lattices. We will show that the usage of phonetic transcripts tends to improve the recall without affecting the precision too much, using an appropriate ranking.

3.1 Spoken term detection task
As stated in the STD 2006 evaluation plan [2], the task consists of finding all the exact matches of a specific query in a given corpus of speech data. A query is a phrase containing several words. The queries are text, not speech. Note that this task is different from the more classical task of spoken document retrieval. Manual transcripts of the speech are not provided but are used by the evaluators to find true occurrences. By definition, true occurrences of a query are found automatically by searching the manual transcripts using the following rule: the gap between adjacent words in a query must be less than 0.5 seconds in the corresponding speech. For evaluating the results, each system output occurrence is judged as correct or not according to whether it is close in time to a true occurrence of the query retrieved from the manual transcripts; it is judged as correct if the midpoint of the system output occurrence is less than or equal to 0.5 seconds from the time span of a true occurrence of the query.
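The judging rule just described can be sketched as follows (a minimal illustration; the function name and the interval-distance reading of "time span" are our assumptions):

```python
def is_correct(sys_begin, sys_dur, true_begin, true_dur, tol=0.5):
    """A system occurrence is judged correct if the midpoint of the
    system output is at most tol seconds from the time span of a
    true occurrence (distance zero if it falls inside the span)."""
    midpoint = sys_begin + sys_dur / 2.0
    true_end = true_begin + true_dur
    distance = max(true_begin - midpoint, midpoint - true_end, 0.0)
    return distance <= tol

# Example: a hit spanning [10.0, 10.6] against a true occurrence at [10.4, 10.9].
print(is_correct(10.0, 0.6, 10.4, 0.5))  # True: midpoint 10.3 is 0.1 s away
```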
3.2 Indexing
We have used the same indexing process for WCNs and phonetic transcripts. Each occurrence of a unit of indexing (word or phone) u in a transcript D is indexed with the following information:

• the begin time t of the occurrence of u,
• the duration d of the occurrence of u.

In addition, for WCN indexing, we store

• the confidence level of the occurrence of u at the time t, which is evaluated by its posterior probability Pr(u | t, D),
• the rank of the occurrence of u among the other hypotheses beginning at the same time t, rank(u | t, D).

Note that since the task is to find exact matches of the phrase queries, we have not filtered stopwords, and the corpus is not stemmed before indexing.

3.3 Search
In the following, we present our approach for accomplishing the STD task using the indices described above. The terms are extracted from the query. The vocabulary of the ASR system building the word transcripts is given. Terms that are part of this vocabulary are IV terms; the other terms are OOV. For an IV query term, the posting list is extracted from the word index. For an OOV query term, the term is converted to a sequence of phones using a joint maximum entropy N-gram model [10]. For example, the term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy). The posting list of each phone is extracted from the phonetic index. The next step consists of merging the different posting lists according to the timestamps of the occurrences in order to create results matching the query. First, we check that the words and phones appear in the right order according to their begin times. Second, we check that the gap in time between adjacent words and phones is reasonable. Conforming to the requirements of the STD evaluation, the distance in time between two adjacent query terms must be less than 0.5 seconds. For OOV search, we check that the distance in time between two adjacent phones of a query term is less than 0.2 seconds; this value has been determined empirically. In this way, we can reduce the effect of insertion errors, since we allow insertions between adjacent words and phones. Our query processing does not allow substitutions and deletions.

Example: Let us consider the phrase query prosody research. The term prosody is OOV and the term research is IV. The term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy). The posting list of each phone is extracted from the phonetic index. We merge the posting lists of the phones such that the sequence of phones appears in the right order and the gap in time between the pairs of phones (p, r), (r, aa), (aa, z), (z, ih), (ih, d), (d, iy) is less than 0.2 seconds. We obtain occurrences of the term prosody. The posting list of research is extracted from the word index and we merge it with the occurrences found for prosody such that they appear in the right order and the distance in time between prosody and research is less than 0.5 seconds.

Note that our indexing model allows us to search for different types of queries:

1. queries containing only IV terms, using the word index;
2. queries containing only OOV terms, using the phonetic index;
3. keyword queries containing both IV and OOV terms, using the word index for IV terms and the phonetic index for OOV terms; for query processing, the different sets of matches are unified if the query terms have OR semantics and intersected if the query terms have AND semantics;
4. phrase queries containing both IV and OOV terms; for query processing, the posting lists of the IV terms retrieved from the word index are merged with the posting lists of the OOV terms retrieved from the phonetic index.

The merging is possible since we have stored the timestamps for each unit of indexing (word and phone) in both indices. The STD evaluation has focused on the fourth query type. It is the hardest task, since we need to combine posting lists retrieved from the phonetic and word indices.

3.4 Ranking
Since IV terms and OOV terms are retrieved from two different indices, we propose two different functions for scoring an occurrence of a term; afterward, an aggregate score is assigned to the query based on the scores of the query terms. Because the task is term detection, we do not use a document frequency criterion for ranking the occurrences. Let us consider a query Q = (k0, ..., kn), associated with a boosting vector B = (B1, ..., Bj). This vector associates a boosting factor with each rank of the different hypotheses; the boosting factors are normalized between 0 and 1. If the rank r is larger than j, we assume Br = 0.

3.4.1 In-vocabulary term ranking
For IV term ranking, we extend the work of Mamou et al. [17] on spoken document retrieval to term detection. We use the information provided by the word index. We define the score score(k, t, D) of a keyword k occurring at a time t in the transcript D by the following formula:

score(k, t, D) = B_{rank(k|t,D)} × Pr(k | t, D).

Note that 0 ≤ score(k, t, D) ≤ 1.

3.4.2 Out-of-vocabulary term ranking
For OOV term ranking, we use the information provided by the phonetic index. We give a higher rank to occurrences of OOV terms that contain phones close (in time) to each other. We define a scoring function that is related to the average gap in time between the different phones. Let us consider a keyword k converted to the sequence of phones (p_0^k, ..., p_l^k). We define the normalized score score(k, t_0^k, D) of a keyword k = (p_0^k, ..., p_l^k), where each p_i^k occurs at time t_i^k with a duration of d_i^k in the transcript D, by the following formula:

score(k, t_0^k, D) = 1 − (1/l) × Σ_{i=1}^{l} 5 × (t_i^k − (t_{i−1}^k + d_{i−1}^k)).

Note that according to what we have explained in Section 3.3, we have, for all 1 ≤ i ≤ l, 0 ≤ t_i^k − (t_{i−1}^k + d_{i−1}^k) < 0.2 sec, hence 0 ≤ 5 × (t_i^k − (t_{i−1}^k + d_{i−1}^k)) < 1, and consequently 0 < score(k, t_0^k, D) ≤ 1. The duration of the keyword occurrence is t_l^k − t_0^k + d_l^k.

Example: Let us consider the sequence (p, r, aa, z, ih, d, iy) and two different occurrences of the sequence. For each phone, we give the begin time and the duration in seconds.
Occurrence 1: (p, 0.25, 0.01), (r, 0.36, 0.01), (aa, 0.37, 0.01), (z, 0.38, 0.01), (ih, 0.39, 0.01), (d, 0.4, 0.01), (iy, 0.52, 0.01).
Occurrence 2: (p, 0.45, 0.01), (r, 0.46, 0.01), (aa, 0.47, 0.01), (z, 0.48, 0.01), (ih, 0.49, 0.01), (d, 0.5, 0.01), (iy, 0.51, 0.01).
According to our formula, the score of the first occurrence is 0.83 and the score of the second occurrence is 1. In the first occurrence, there is probably some insertion or silence between the phones p and r, and between the phones d and iy. The silence can be due to the fact that the phones belong to two different words, and therefore it is not an occurrence of the term prosody.
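To illustrate Sections 3.3 and 3.4.2 together, the sketch below merges per-phone posting lists under the 0.2-second gap constraint and scores the resulting occurrences; the greedy merge is a simplification of our query processing, and all names and data layouts are illustrative:

```python
def merge_phone_postings(postings, max_gap=0.2):
    """Assemble occurrences of an OOV term from per-phone posting
    lists, each a time-sorted list of (begin_time, duration) pairs.
    Adjacent phones must appear in order, with a gap of less than
    max_gap seconds (insertions inside the gap are tolerated)."""
    results = []
    for first in postings[0]:
        seq = [first]
        for plist in postings[1:]:
            prev_end = seq[-1][0] + seq[-1][1]
            nxt = next((p for p in plist
                        if 0.0 <= p[0] - prev_end < max_gap), None)
            if nxt is None:
                break
            seq.append(nxt)
        if len(seq) == len(postings):
            results.append(seq)
    return results

def oov_score(seq):
    """score(k, t0, D) = 1 - average over i of 5 * gap_i."""
    gaps = [seq[i][0] - (seq[i - 1][0] + seq[i - 1][1])
            for i in range(1, len(seq))]
    return 1.0 - sum(5.0 * g for g in gaps) / len(gaps)

# The two occurrences of (p, r, aa, z, ih, d, iy) from the example:
occ1 = [(0.25, 0.01), (0.36, 0.01), (0.37, 0.01), (0.38, 0.01),
        (0.39, 0.01), (0.40, 0.01), (0.52, 0.01)]
occ2 = [(0.45, 0.01), (0.46, 0.01), (0.47, 0.01), (0.48, 0.01),
        (0.49, 0.01), (0.50, 0.01), (0.51, 0.01)]
print(oov_score(occ1), oov_score(occ2))  # approx. 0.83 and 1.0, as above
```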
3.4.3 Combination
The score of an occurrence of a query Q at time t0 in the document D is determined by the product of the scores of the keywords ki, where each ki occurs at time ti with a duration di in the transcript D:

score(Q, t0, D) = ( Π_{i=0}^{n} score(ki, ti, D) )^{γn}.

Note that according to what we have explained in Section 3.3, we have, for all 1 ≤ i ≤ n, 0 < ti − (ti−1 + di−1) < 0.5 sec. Our goal is to estimate, for each found occurrence, how likely it is that the query appears. This is different from classical IR, which aims to rank the results and not to score them. Since the probability of a false alarm is inversely proportional to the length of the phrase query, we have boosted the score of queries by a γn exponent that is related to the number of keywords in the phrase. We have determined empirically the value γn = 1/n. The begin time of the query occurrence is determined by the begin time t0 of the first query term, and the duration of the query occurrence by tn − t0 + dn.

4. EXPERIMENTS

4.1 Experimental setup
Our corpus consists of the evaluation set provided by NIST for the STD 2006 evaluation [1]. It includes three different source types in US English: three hours of broadcast news (BNEWS), three hours of conversational telephony speech (CTS) and two hours of conference room meetings (CONFMTG). As shown in Section 4.2, these different collections have different accuracies. CTS and CONFMTG are spontaneous speech. For the experiments, we have processed the query set provided by NIST, which includes 1100 queries. Each query is a phrase containing between one and five terms, common and rare terms, terms that are in the manual transcripts and terms that are not. Testing and determination of empirical values have been carried out on another set of speech data and queries, the development set, also provided by NIST. We have used the IBM research prototype ASR system, described in [26], for transcribing speech data. We have produced WCNs for the three different source types. 1-best phonetic transcripts were generated only for BNEWS and CTS, since CONFMTG phonetic transcripts have too low an accuracy. We have adapted Juru [7], a full-text search library written in Java, to index the transcripts and to store the timestamps of the words and phones; search results have been retrieved as described in Section 3. For each found occurrence of the given query, our system outputs: the location of the term in the audio recording (begin time and duration), the score indicating how likely the occurrence of the query is (as defined in Section 3.4), and a hard (binary) decision as to whether the detection is correct. We measure precision and recall by comparing the results obtained over the automatic transcripts (only the results having a true hard decision) to the results obtained over the reference manual transcripts. Our aim is to evaluate the ability of the suggested retrieval approach to handle transcribed speech data. Thus, the closer the automatic results are to the manual results, the better the search effectiveness over the automatic transcripts will be. The results returned from the manual transcription for a given query are considered relevant and are expected to be retrieved with the highest scores. This approach to measuring search effectiveness using manual data as a reference is very common in speech retrieval research [25, 22, 8, 9, 17].
Besides recall and precision, we use the evaluation measures defined by NIST for the 2006 STD evaluation [2]: the Actual Term-Weighted Value (ATWV) and the Maximum Term-Weighted Value (MTWV). The term-weighted value (TWV) is computed by first computing the miss and false alarm probabilities for each query separately, then using these and an (arbitrarily chosen) prior probability to compute query-specific values, and finally averaging these query-specific values over all queries q to produce an overall system value:

TWV(θ) = 1 − average_q { P_miss(q, θ) + β × P_FA(q, θ) },

where β = (C/V) × (Pr_q^{−1} − 1) and θ is the detection threshold. For the evaluation, the cost/value ratio C/V has been set to 0.1 and the prior probability of a query Pr_q to 10^{−4}; therefore, β = 999.9. Miss and false alarm probabilities for a given query q are functions of θ:

P_miss(q, θ) = 1 − N_correct(q, θ) / N_true(q),
P_FA(q, θ) = N_spurious(q, θ) / N_NT(q),

where:
• N_correct(q, θ) is the number of correct detections (retrieved by the system) of the query q with a score greater than or equal to θ;
• N_spurious(q, θ) is the number of spurious detections of the query q with a score greater than or equal to θ;
• N_true(q) is the number of true occurrences of the query q in the corpus;
• N_NT(q) is the number of opportunities for incorrect detection of the query q in the corpus, i.e., the non-target query trials, defined by N_NT(q) = T_speech − N_true(q), where T_speech is the total amount of speech in the collection (in seconds).

ATWV is the actual term-weighted value; it is the detection value attained by the system as a result of the system output and the binary decision output for each putative occurrence. It ranges from −∞ to +1. MTWV is the maximum term-weighted value over the range of all possible values of θ. It ranges from 0 to +1. We have also provided the detection error tradeoff (DET) curve [19] of miss probability (P_miss) vs. false alarm probability (P_FA). We have used the STDEval tool to extract the relevant results from the manual transcripts and to compute ATWV, MTWV and the DET curve. We have determined empirically the following values for the boosting vector defined in Section 3.4: B_i = 1/i.

4.2 WER analysis
We use the word error rate (WER) in order to characterize the accuracy of the transcripts. WER is defined as follows:

WER = (S + D + I) / N × 100,

where N is the total number of words in the corpus, and S, I, and D are the total numbers of substitution, insertion, and deletion errors, respectively. The substitution error rate (SUBR) is defined by

SUBR = S / (S + D + I) × 100.

The deletion error rate (DELR) and insertion error rate (INSR) are defined in a similar manner.

Table 1: WER and distribution of the error types over word 1-best path extracted from WCNs for the different source types.

  corpus        WER(%)  SUBR(%)  DELR(%)  INSR(%)
  BNEWS WCN     12.7    49       42       9
  CTS WCN       19.6    51       38       11
  CONFMTG WCN   47.4    47       49       3

Table 1 gives the WER and the distribution of the error types over 1-best path transcripts extracted from WCNs. The WER of the 1-best path phonetic transcripts is approximately two times worse than the WER of the word transcripts. That is the reason why we have not retrieved from phonetic transcripts on CONFMTG speech data.
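Returning to the evaluation measures above, the TWV computation can be sketched as follows (the per-query data layout is hypothetical; β = 999.9 as in the evaluation, and each query is assumed to have at least one true occurrence):

```python
def twv(queries, theta, beta=999.9):
    """Term-weighted value at detection threshold theta.

    queries: list of dicts with per-query counts:
      n_true     - true occurrences of the query in the corpus
      n_nt       - non-target trials, T_speech - n_true
      detections - list of (score, is_correct) pairs
    """
    values = []
    for q in queries:
        kept = [d for d in q["detections"] if d[0] >= theta]
        n_correct = sum(1 for _, ok in kept if ok)
        n_spurious = sum(1 for _, ok in kept if not ok)
        p_miss = 1.0 - n_correct / q["n_true"]
        p_fa = n_spurious / q["n_nt"]
        values.append(p_miss + beta * p_fa)
    return 1.0 - sum(values) / len(values)

def mtwv(queries, beta=999.9):
    """Maximum TWV over all candidate thresholds."""
    thetas = {s for q in queries for s, _ in q["detections"]}
    return max(twv(queries, t, beta) for t in thetas) if thetas else 0.0
```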
4.3 Theta threshold
We have determined empirically a detection threshold θ per source type; the hard decision of the occurrences having a score less than θ is set to false. False occurrences returned by the system are not considered as retrieved and therefore are not used for computing ATWV, precision and recall. The value of the threshold θ per source type is reported in Table 2. It is correlated with the accuracy of the transcripts. Basically, setting a threshold aims to eliminate false alarms from the retrieved occurrences without adding misses. The higher the WER is, the higher the θ threshold should be.

Table 2: Values of the θ threshold per source type.

  BNEWS  CTS   CONFMTG
  0.4    0.61  0.91

4.4 Processing resource profile
We report in Table 3 the processing resource profile. Concerning the index size, note that our index is compressed using IR index compression techniques. The indexing time includes both audio processing (generation of word and phonetic transcripts) and building of the searchable indices.

Table 3: Processing resource profile. (HS: Hours of Speech. HP: Processing Hours. sec.P: Processing seconds)

  Index size           0.3267     MB/HS
  Indexing time        7.5627     HP/HS
  Index memory usage   1653.4297  MB
  Search speed         0.0041     sec.P/HS
  Search memory usage  269.1250   MB

4.5 Retrieval measures
We compare our approach (WCN phonetic), presented in Section 4.1, with another approach (1-best-WCN phonetic). The only difference between these two approaches is that, in 1-best-WCN phonetic, we index only the 1-best path extracted from the WCN instead of indexing the entire WCN. WCN phonetic was our primary system for the evaluation and 1-best-WCN phonetic was one of our contrastive systems. Average precision and recall, MTWV and ATWV on the 1100 queries are given in Table 4. We also provide the DET curve for the WCN phonetic approach in Figure 2; the point that maximizes the TWV, the MTWV, is specified on each curve. Note that retrieval performance has been evaluated separately for each source type, since the accuracy of the speech differs per source type as shown in Section 4.2. As expected, we can see that MTWV and ATWV decrease with higher WER. The retrieval performance is improved when using WCNs relative to the 1-best path.

Table 4: ATWV, MTWV, precision and recall per source type.

  measure      BNEWS   CTS     CONFMTG
  WCN phonetic
    ATWV       0.8485  0.7392  0.2365
    MTWV       0.8532  0.7408  0.2508
    precision  0.94    0.90    0.65
    recall     0.89    0.81    0.37
  1-best-WCN phonetic
    ATWV       0.8279  0.7102  0.2381
    MTWV       0.8319  0.7117  0.2512
    precision  0.95    0.91    0.66
    recall     0.84    0.75    0.37

Figure 2: DET curve for WCN phonetic approach.

This improvement is due to the fact that the miss probability is improved by indexing all the hypotheses provided by the WCNs. This observation confirms the results shown by Mamou et al. [17] in the context of spoken document retrieval. The ATWV that we have obtained is close to the MTWV; we have combined our ranking model with an appropriate threshold θ to eliminate results with lower scores. Therefore, the effect of false alarms added by WCNs is reduced. The WCN phonetic approach was used in the recent NIST STD evaluation and received the highest overall ranking among eleven participants. For comparison, the system that ranked third obtained an ATWV of 0.8238 for BNEWS, 0.6652 for CTS and 0.1103 for CONFMTG.
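The hard-decision step of Sections 4.3 and 4.5 then amounts to a simple per-source-type comparison (a sketch; the dictionary layout is ours, with threshold values taken from Table 2):

```python
# Empirically tuned detection thresholds per source type (Table 2).
THETA = {"BNEWS": 0.4, "CTS": 0.61, "CONFMTG": 0.91}

def hard_decision(score, source_type):
    """Occurrences scoring below the source type's threshold get a
    'false' hard decision and are excluded from the ATWV, precision
    and recall computations."""
    return score >= THETA[source_type]
```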
4.6 Influence of the duration of the query on the retrieval performance
We have analyzed the retrieval performance according to the average duration of the occurrences in the manual transcripts. The query set was divided into three quantiles according to the duration; Table 5 reports ATWV and MTWV per quantile. We can see that we performed better on longer queries. One of the reasons is the fact that the ASR system is more accurate on long words. Hence, it was justified to boost the score of the results with the exponent γn, as explained in Section 3.4.3, according to the length of the query.

quantile        0-33    33-66   66-100
BNEWS   ATWV    0.7655  0.8794  0.9088
        MTWV    0.7819  0.8914  0.9124
CTS     ATWV    0.6545  0.8308  0.8378
        MTWV    0.6551  0.8727  0.8479
CONFMTG ATWV    0.1677  0.3493  0.3651
        MTWV    0.1955  0.4109  0.3880

Table 5: ATWV, MTWV according to the duration of the query occurrences per source type.

4.7 OOV vs. IV query processing
We have randomly chosen three sets of queries from the query sets provided by NIST: 50 queries containing only IV terms, 50 queries containing only OOV terms, and 50 hybrid queries containing both IV and OOV terms. The following experiment has been carried out on the BNEWS collection, and IV and OOV terms have been determined according to the vocabulary of the BNEWS ASR system. We compare three different retrieval approaches: using only the word index, using only the phonetic index, and combining the word and phonetic indices. Table 6 summarizes the retrieval performance for each approach and each type of query.

index           word               phonetic           word and phonetic
                precision  recall  precision  recall  precision  recall
IV queries      0.8        0.96    0.11       0.77    0.8        0.96
OOV queries     0          0       0.13       0.79    0.13       0.79
hybrid queries  0          0       0.15       0.71    0.89       0.83

Table 6: Comparison of word and phonetic approach on IV and OOV queries.

Using a word-based approach for dealing with OOV and hybrid queries drastically degrades the performance of the retrieval; precision and recall are null. Using a phone-based approach for dealing with IV queries also degrades the performance of the retrieval relative to the word-based approach. As expected, the approach combining word and phonetic indices presented in Section 3 leads to the same retrieval performance as the word approach for IV queries and to the same retrieval performance as the phonetic approach for OOV queries. This approach always outperforms the others, which justifies the need to combine word and phonetic search.
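For reference, the sketch below illustrates the timestamp-based merging that the combined approach relies on, under the gap constraints of Section 3.3 (adjacent query terms within 0.5 seconds, adjacent phones of an OOV term within 0.2 seconds). The posting representation follows Section 3.2 (begin time and duration per occurrence); the function names and the quadratic join are illustrative simplifications, not our system's implementation:

```python
from typing import List, Tuple

# A posting is (begin_time, duration) in seconds, as stored in both
# the word index and the phonetic index.
Posting = Tuple[float, float]

def merge_adjacent(left: List[Posting], right: List[Posting],
                   max_gap: float) -> List[Posting]:
    """Join occurrences of two adjacent query units.

    Keeps pairs where `right` begins after `left` ends, with a gap
    below `max_gap`, and returns postings spanning both units.
    Insertions inside the gap are tolerated; substitutions and
    deletions are not recovered.
    """
    merged = []
    for lb, ld in left:
        l_end = lb + ld
        for rb, rd in right:
            if 0 <= rb - l_end < max_gap:
                merged.append((lb, rb + rd - lb))  # combined span
    return merged

def match_phrase(term_postings: List[List[Posting]]) -> List[Posting]:
    """Find phrase occurrences given one posting list per query term,
    each list coming from the word index (IV terms) or from
    phone-level merging (OOV terms, with max_gap = 0.2)."""
    result = term_postings[0]
    for postings in term_postings[1:]:
        result = merge_adjacent(result, postings, max_gap=0.5)
    return result
```

Since posting lists are time-sorted, the nested loop shown here could be replaced by a linear merge in a production implementation.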
5. RELATED WORK
In the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents. Some of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12]. An LVCSR system is used to transcribe the speech into 1-best path word transcripts. The transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index. A generic IR system over the text is used for word spotting and search, as described by Brown et al. [6] and James [14]. This strategy works well for transcripts like broadcast news collections that have a low WER (in the range of 15%-30%) and are redundant by nature (the same piece of information is spoken several times in different manners). Moreover, the algorithms have been mostly tested over long queries stated in plain English, and retrieval for such queries is more robust against speech recognition errors.

An alternative approach consists of using word lattices in order to improve the effectiveness of SDR. Singhal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors. From an IR perspective, a classical way to bring in new terms is document expansion using a similar corpus. Their approach consists of using word lattices to determine which words returned by a document expansion algorithm should be added to the original transcript. The need for a document expansion algorithm was justified by the fact that the word lattices they worked with lack information about word probabilities. Chelba and Acero [8, 9] propose a more compact word lattice, the position specific posterior lattice (PSPL). This data structure is similar to WCN and leads to a more compact index. The offset of the terms in the speech documents is also stored in the index. However, their evaluation framework is carried out on lectures that are relatively planned, in contrast to conversational speech, and their ranking model is based on the term confidence level but does not take into consideration the rank of the term among the other hypotheses. Mamou et al. [17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search. However, in the above works, the problem of queries containing OOV terms is not addressed.

Popular approaches to deal with OOV queries are based on sub-word transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23]. The classical approach consists of using phonetic transcripts. The transcripts are indexed in the same manner as words, using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones, and the retrieval is based on searching that string of phones in the phonetic transcript. To account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices. They are attractive as they accommodate high error rate conditions and allow OOV queries [15, 3, 20, 23, 21, 27]. However, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index. Moreover, beside the improvement in the recall of the search, the precision is affected, since phonetic lattices are often inaccurate. Consequently, phonetic approaches should be used only for OOV search; for queries that also contain IV terms, this technique degrades retrieval performance in comparison to the word-based approach.

Saraclar and Sproat [22] show improvement in word spotting accuracy for both IV and OOV queries, using phonetic and word lattices from which a confidence measure of a word or a phone can be derived. They propose three different retrieval strategies: search both the word and the phonetic indices and unify the two sets of results; search the word index for IV queries and the phonetic index for OOV queries; or search the word index and, if no result is returned, search the phonetic index. However, no strategy is proposed to deal with phrase queries containing both IV and OOV terms. Amir et al. [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval. However, their phonetic transcript is obtained from a text-to-phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data.
An important issue to be considered when looking at the state of the art in retrieval of spoken data is the lack of a common test set and appropriate query terms. This paper uses such a task, and the STD evaluation provides a good summary of the performance of different approaches under the same test conditions.

6. CONCLUSIONS
This work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources. Previously, phonetic-based and word-based approaches have been used for IR on speech data. The former suffers from low accuracy and the latter from the limited vocabulary of the recognition system. In this paper, we have presented a vocabulary independent model of indexing and search that combines both approaches. The system can deal with all kinds of queries, including phrases, by combining, at retrieval time, information extracted from two different indices: a word index and a phonetic index. The scoring of OOV terms is based on the proximity (in time) between the different phones. The scoring of IV terms is based on information provided by the WCNs. We have shown an improvement in the retrieval performance when using the full WCN rather than only the 1-best path, and when using the phonetic index for the search of OOV query terms. This approach always outperforms the approaches using only a word index or only a phonetic index. As future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.

7. ACKNOWLEDGEMENTS
Jonathan Mamou is grateful to David Carmel and Ron Hoory for helpful and interesting discussions.

8. REFERENCES
[1] NIST Spoken Term Detection 2006 Evaluation Website, http://www.nist.gov/speech/tests/std/.
[2] NIST Spoken Term Detection (STD) 2006 Evaluation Plan, http://www.nist.gov/speech/tests/std/docs/std06-evalplan-v10.pdf.
[3] C. Allauzen, M. Mohri, and M. Saraclar. General indexation of weighted automata - application to spoken utterance retrieval. In Proceedings of the HLT-NAACL 2004 Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval, Boston, MA, USA, 2004.
[4] A. Amir, M. Berg, and H. Permuter. Mutual relevance feedback for multimodal query formulation in video retrieval. In MIR '05: Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, pages 17-24, New York, NY, USA, 2005. ACM Press.
[5] A. Amir, A. Efrat, and S. Srinivasan. Advances in phonetic word spotting. In CIKM '01: Proceedings of the tenth international conference on Information and knowledge management, pages 580-582, New York, NY, USA, 2001. ACM Press.
[6] M. Brown, J. Foote, G. Jones, K. Jones, and S. Young. Open-vocabulary speech indexing for voice and video mail retrieval. In Proceedings of ACM Multimedia 96, pages 307-316, Hong Kong, November 1996.
[7] D. Carmel, E. Amitay, M. Herscovici, Y. S. Maarek, Y. Petruschka, and A. Soffer. Juru at TREC 10: Experiments with index pruning. In Proceedings of the Tenth Text Retrieval Conference (TREC-10). National Institute of Standards and Technology, NIST, 2001.
[8] C. Chelba and A. Acero. Indexing uncertainty for spoken document search. In Interspeech 2005, pages 61-64, Lisbon, Portugal, 2005.
[9] C. Chelba and A. Acero. Position specific posterior lattices for indexing speech. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL), Ann Arbor, MI, 2005.
[10] S. Chen. Conditional and joint models for grapheme-to-phoneme conversion. In Eurospeech 2003, Geneva, Switzerland, 2003.
[11] M. Clements, S. Robertson, and M. Miller. Phonetic searching applied to on-line distance learning modules. In Proceedings of the 2002 IEEE 10th Digital Signal Processing Workshop and 2nd Signal Processing Education Workshop, pages 186-191, 2002.
[12] J. Garofolo, G. Auzanne, and E. Voorhees. The TREC spoken document retrieval track: A success story. In Proceedings of the Ninth Text Retrieval Conference (TREC-9). National Institute of Standards and Technology, NIST, 2000.
[13] D. Hakkani-Tur and G. Riccardi. A general algorithm for word graph matrix decomposition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 596-599, Hong Kong, 2003.
[14] D. James. The application of classical information retrieval techniques to spoken documents. PhD thesis, University of Cambridge, Downing College, 1995.
[15] D. A. James. A system for unrestricted topic retrieval from radio news broadcasts. In Proc. ICASSP '96, pages 279-282, Atlanta, GA, 1996.
[16] B. Logan, P. Moreno, J. V. Thong, and E. Whittaker. An experimental study of an audio indexing system for the web. In Proceedings of ICSLP, 1996.
[17] J. Mamou, D. Carmel, and R. Hoory. Spoken document retrieval from call-center conversations. In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 51-58, New York, NY, USA, 2006. ACM Press.
[18] L. Mangu, E. Brill, and A. Stolcke. Finding consensus in speech recognition: word error minimization and other applications of confusion networks. Computer Speech and Language, 14(4):373-400, 2000.
[19] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki. The DET curve in assessment of detection task performance. In Proc. Eurospeech '97, pages 1895-1898, Rhodes, Greece, 1997.
[20] K. Ng and V. W. Zue. Subword-based approaches for spoken document retrieval. Speech Communication, 32(3):157-186, 2000.
[21] Y. Peng and F. Seide. Fast two-stage vocabulary-independent search in spontaneous speech. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 481-484, 2005.
[22] M. Saraclar and R. Sproat. Lattice-based search for spoken utterance retrieval. In HLT-NAACL 2004: Main Proceedings, pages 129-136, Boston, Massachusetts, USA, 2004.
[23] F. Seide, P. Yu, C. Ma, and E. Chang. Vocabulary-independent search in spontaneous speech. In ICASSP-2004, IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004.
[24] A. Singhal, J. Choi, D. Hindle, D. Lewis, and F. Pereira. AT&T at TREC-7. In Proceedings of the Seventh Text Retrieval Conference (TREC-7). National Institute of Standards and Technology, NIST, 1999.
[25] A. Singhal and F. Pereira. Document expansion for speech retrieval. In SIGIR '99: Proceedings of the 22nd annual international ACM SIGIR conference on research and development in information retrieval, pages 34-41, New York, NY, USA, 1999. ACM Press.
[26] H. Soltau, B. Kingsbury, L. Mangu, D. Povey, G. Saon, and G. Zweig. The IBM 2004 conversational telephony system for rich transcription. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2005.
[27] K. Thambiratnam and S. Sridharan. Dynamic match phone-lattice searches for very fast and accurate unrestricted vocabulary keyword spotting. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2005.
[28] P. C. Woodland, S. E. Johnson, P. Jourlin, and K. S. Jones. Effects of out of vocabulary words in spoken document retrieval (poster session). In SIGIR '00: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 372-374, New York, NY, USA, 2000. ACM Press.
Vocabulary Independent Spoken Term Detection ABSTRACT We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems provide also phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purpose. The value of the proposed method is demonstrated by the relative high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1]. 1. INTRODUCTION The rapidly increasing amount of spoken data calls for solutions to index and search this data. The classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool. In the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word transcripts. Some of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12]. These tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals. One of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts. While the accuracy of automatic speech recognition (ASR) systems depends on the scenario and environment, state-of-the-art systems achieved better than 90% accuracy in transcription of such data. In 2000, Garofolo et al. concluded that "Spoken document retrieval is a solved problem" [12]. However, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results. OOV terms are missing words from the ASR system vocabulary and are replaced in the output transcript by alternatives that are probable, given the recognition acoustic model and the language model. It has been experimentally observed that over 10% of user queries can contain OOV terms [16], as queries often relate to named entities that typically have a poor coverage in the ASR vocabulary. The effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. [28]. In many applications the OOV rate may get worse over time unless the recognizer's vocabulary is periodically updated. Another approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. The retrieval is based on searching the sequence of phones representing the query in the phonetic transcripts. The main drawback of this approach is the inherent high error rate of the transcripts. 
Therefore, such approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system. A solution would be to combine the two different approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, the information is retrieved from the word index for IV terms and from the phonetic index for OOV terms. We would like to be able to process also hybrid queries, i.e, queries that include both IV and OOV terms. Consequently, we need to merge pieces of information retrieved from word index and phonetic index. Proximity information on the occurrences of the query terms is required for phrase search and for proximity-based ranking. In classical IR, the index stores for each occurrence of a term, its offset. Therefore, we cannot merge posting lists retrieved by phonetic index with those retrieved by word index since the offset of the occurrences retrieved from the two different indices are not comparable. The only element of comparison between phonetic and word transcripts are the timestamps. No previous work combining word and phonetic approach has been done on phrase search. We present a novel scheme for information retrieval that consists of storing, during the indexing process, for each unit of indexing (phone or word) its timestamp. We search queries by merging the information retrieved from the two different indices, word index and phonetic index, according to the timestamps of the query terms. We analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1]. The paper is organized as follows. We describe the audio processing in Section 2. The indexing and retrieval methods are presented in section 3. Experimental setup and results are given in Section 4. In Section 5, we give an overview of related work. Finally, we conclude in Section 6. 2. AUTOMATIC SPEECH RECOGNITION SYSTEM We use an ASR system for transcribing speech data. It works in speaker-independent mode. For best recognition results, a speaker-independent acoustic model and a language model are trained in advance on data with similar characteristics. Typically, ASR generates lattices that can be considered as directed acyclic graphs. Each vertex in a lattice is associated with a timestamp and each edge (u, v) is labeled with a word or phone hypothesis and its prior probability, which is the probability of the signal delimited by the timestamps of the vertices u and v, given the hypothesis. The 1-best path transcript is obtained from the lattice using dynamic programming techniques. Mangu et al. [18] and Hakkani-Tur et al. [13] propose a compact representation of a word lattice called word confusion network (WCN). Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice. As explained in [13], the three main steps for building a WCN from a word lattice are as follows: 1. Compute the posterior probabilities for all edges in the word lattice. 2. Extract a path from the word lattice (which can be the 1-best, the longest or any random path), and call it the pivot path of the alignment. 3. 
Traverse the word lattice, and align all the transitions with the pivot, merging the transitions that correspond to the same word (or label) and occur in the same time interval by summing their posterior probabilities. The 1-best path of a WCN is obtained from the path containing the best hypotheses. As stated in [18], although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. Typical structures of a lattice and a WCN are given in Figure 1. Figure 1: Typical structures of a lattice and a WCN. 3. RETRIEVAL MODEL The main problem with retrieving information from spoken data is the low accuracy of the transcription particularly on terms of interest such as named entities and content words. Generally, the accuracy of a word transcript is characterized by its word error rate (WER). There are three kinds of errors that can occur in a transcript: substitution of a term that is part of the speech by another term, deletion of a spoken term that is part of the speech and insertion of a term that is not part of the speech. Substitutions and deletions reflect the fact that an occurrence of a term in the speech signal is not recognized. These misses reduce the recall of the search. Substitutions and insertions reflect the fact that a term which is not part of the speech signal appears in the transcript. These misses reduce the precision of the search. Search recall can be enhanced by expanding the transcript with extra words. These words can be taken from the other alternatives provided by the WCN; these alternatives may have been spoken but were not the top choice of the ASR. Such an expansion tends to correct the substitutions and the deletions and consequently, might improve recall but will probably reduce precision. Using an appropriate ranking model, we can avoid the decrease in precision. Mamou et al. have presented in [17] the enhancement in the recall and the MAP by searching on WCN instead of considering only the 1-best path word transcript in the context of spoken document retrieval. We have adapted this model of IV search to term detection. In word transcripts, OOV terms are deleted or substituted. Therefore, the usage of phonetic transcripts is more desirable. However, due to their low accuracy, we have preferred to use only the 1-best path extracted from the phonetic lattices. We will show that the usage of phonetic transcripts tends to improve the recall without affecting the precision too much, using an appropriate ranking. 3.1 Spoken document detection task As stated in the STD 2006 evaluation plan [2], the task consists in finding all the exact matches of a specific query in a given corpus of speech data. A query is a phrase containing several words. The queries are text and not speech. Note that this task is different from the more classical task of spoken document retrieval. Manual transcripts of the speech are not provided but are used by the evaluators to find true occurrences. By definition, true occurrences of a query are found automatically by searching the manual transcripts using the following rule: the gap between adjacent words in a query must be less than 0.5 seconds in the corresponding speech. 
For evaluating the results, each system output occurrence is judged as correct or not according to whether it is "close" in time to a true occurrence of the query retrieved from manual transcripts; it is judged as correct if the midpoint of the system output occurrence is less than or equal to 0.5 seconds from the time span of a true occurrence of the query. 3.2 Indexing We have used the same indexing process for WCN and phonetic transcripts. Each occurrence of a unit of indexing (word or phone) u in a transcript D is indexed with the following information: • the begin time t of the occurrence of u, • the duration d of the occurrence of u. In addition, for WCN indexing, we store • the confidence level of the occurrence of u at the time t that is evaluated by its posterior probability Pr (u | t, D), • the rank of the occurrence of u among the other hypotheses beginning at the same time t, rank (u | t, D). Note that since the task is to find exact matches of the phrase queries, we have not filtered stopwords and the corpus is not stemmed before indexing. 3.3 Search In the following, we present our approach for accomplishing the STD task using the indices described above. The terms are extracted from the query. The vocabulary of the ASR system building word transcripts is given. Terms that are part of this vocabulary are IV terms; the other terms are OOV. For an IV query term, the posting list is extracted from the word index. For an OOV query term, the term is converted to a sequence of phones using a joint maximum entropy N-gram model [10]. For example, the term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy). The posting list of each phone is extracted from the phonetic index. The next step consists of merging the different posting lists according to the timestamp of the occurrences in order to create results matching the query. First, we check that the words and phones appear in the right order according to their begin times. Second, we check that the gap in time between adjacent words and phones is "reasonable". Conforming to the requirements of the STD evaluation, the distance in time between two adjacent query terms must be less than 0.5 seconds. For OOV search, we check that the distance in time between two adjacent phones of a query term is less that 0.2 seconds; this value has been determined empirically. In such a way, we can reduce the effect of insertion errors since we allow insertions between the adjacent words and phones. Our query processing does not allow substitutions and deletions. Example: Let us consider the phrase query prosody research. The term prosody is OOV and the term research is IV. The term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy). The posting list of each phone is extracted from the phonetic index. We merge the posting lists of the phones such that the sequence of phones appears in the right order and the gap in time between the pairs of phones (p, r), (r, aa), (aa, z), (z, ih), (ih, d), (d, iy) is less than 0.2 seconds. We obtain occurrences of the term prosody. The posting list of research is extracted from the word index and we merge it with the occurrences found for prosody such that they appear in the right order and the distance in time between prosody and research is less than 0.5 seconds. Note that our indexing model allows to search for different types of queries: 1. queries containing only IV terms using the word index. 2. queries containing only OOV terms using the phonetic index. 3. 
keyword queries containing both IV and OOV terms using the word index for IV terms and the phonetic index for OOV terms; for query processing, the different sets of matches are unified if the query terms have OR semantics and intersected if the query terms have AND semantics. 4. phrase queries containing both IV and OOV terms; for query processing, the posting lists of the IV terms retrieved from the word index are merged with the posting lists of the OOV terms retrieved from the phonetic index. The merging is possible since we have stored the timestamps for each unit of indexing (word and phone) in both indices. The STD evaluation has focused on the fourth query type. It is the hardest task since we need to combine posting lists retrieved from phonetic and word indices. 3.4 Ranking Since IV terms and OOV terms are retrieved from two different indices, we propose two different functions for scoring an occurrence of a term; afterward, an aggregate score is assigned to the query based on the scores of the query terms. Because the task is term detection, we do not use a document frequency criterion for ranking the occurrences. Let us consider a query Q = (k0,..., kn), associated with a boosting vector B = (B1,..., Bj). This vector associates a boosting factor to each rank of the different hypotheses; the boosting factors are normalized between 0 and 1. If the rank r is larger than j, we assume Br = 0. 3.4.1 In vocabulary term ranking For IV term ranking, we extend the work of Mamou et al. [17] on spoken document retrieval to term detection. We use the information provided by the word index. We define the score score (k, t, D) of a keyword k occurring at a time t in the transcript D, by the following formula: score (k, t, D) = Brank (k | t, D) × Pr (k | t, D) Note that 0 <score (k, t, D) <1. 3.4.2 Out of vocabulary term ranking For OOV term ranking, we use the information provided by the phonetic index. We give a higher rank to occurrences of OOV terms that contain phones close (in time) to each other. We define a scoring function that is related to the average gap in time between the different phones. Let us consider a keyword k converted to the sequence of phones (pk0,..., pkl). We define the normalized score score (k, tk0, D) of a keyword k = (pk0,..., pkl), where each pki occurs at time tki with a duration of dki in the transcript D, by the following formula: l Note that according to what we have ex-plained in Section 3.3, we have ` d1 <i <l, 0 <tki--(tki − 1 + dki − 1) <0.2 sec, 0 <5 x (tki--(tki − 1 + dki − 1)) <1, and consequently, 0 <score (k, tk0, D) <1. The duration of the keyword occurrence is tkl--tk0 + dkl. Example: let us consider the sequence (p, r, aa, z, ih, d, iy) and two different occurrences of the sequence. For each phone, we give the begin time and the duration in second. Occurrence 1: (p, 0.25, 0.01), (r, 0.36, 0.01), (aa, 0.37, 0.01), (z, 0.38, 0.01), (ih, 0.39, 0.01), (d, 0.4, 0.01), (iy, 0.52, 0.01). Occurrence 2: (p, 0.45, 0.01), (r, 0.46, 0.01), (aa, 0.47, 0.01), (z, 0.48, 0.01), (ih, 0.49, 0.01), (d, 0.5, 0.01), (iy, 0.51, 0.01). According to our formula, the score of the first occurrence is 0.83 and the score of the second occurrence is 1. In the first occurrence, there are probably some insertion or silence between the phone p and r, and between the phone d and iy. The silence can be due to the fact that the phones belongs to two different words ans therefore, it is not an occurrence of the term prosody. 
3.4.3 Combination The score of an occurrence of a query Q at time t0 in the document D is determined by the multiplication of the score of each keyword ki, where each ki occurs at time ti with a duration di in the transcript D: Note that according to what we have ex-plained in Section 3.3, we have ` d1 <i <n, 0 <ti--(ti − 1 + di − 1) <0.5 sec. Our goal is to estimate for each found occurrence how likely the query appears. It is different from classical IR that aims to rank the results and not to score them. Since the probability to have a false alarm is inversely proportional to the length of the phrase query, we have boosted the score of queries by a γn exponent, that is related to the number of keywords in the phrase. We have determined empirically the value of γn = 1/n. The begin time of the query occurrence is determined by the begin time t0 of the first query term and the duration of the query occurrence by tn--t0 + dn. 4. EXPERIMENTS 4.1 Experimental setup Our corpus consists of the evaluation set provided by NIST for the STD 2006 evaluation [1]. It includes three different source types in US English: three hours of broadcast news (BNEWS), three hours of conversational telephony speech (CTS) and two hours of conference room meetings (CONFMTG). As shown in Section 4.2, these different collections have different accuracies. CTS and CONFMTG are spontaneous speech. For the experiments, we have processed the query set provided by NIST that includes 1100 queries. Each query is a phrase containing between one to five terms, common and rare terms, terms that are in the manual transcripts and those that are not. Testing and determination of empirical values have been achieved on another set of speech data and queries, the development set, also provided by NIST. We have used the IBM research prototype ASR system, described in [26], for transcribing speech data. We have produced WCNs for the three different source types. 1-best phonetic transcripts were generated only for BNEWS and CTS, since CONFMTG phonetic transcripts have too low accuracy. We have adapted Juru [7], a full-text search library written in Java, to index the transcripts and to store the timestamps of the words and phones; search results have been retrieved as described in Section 3. For each found occurrence of the given query, our system outputs: the location of the term in the audio recording (begin time and duration), the score indicating how likely is the occurrence of query, (as defined in Section 3.4) and a hard (binary) decision as to whether the detection is correct. We measure precision and recall by comparing the results obtained over the automatic transcripts (only the results having true hard decision) to the results obtained over the reference manual transcripts. Our aim is to evaluate the ability of the suggested retrieval approach to handle transcribed speech data. Thus, the closer the automatic results to the manual results is, the better the search effectiveness over the automatic transcripts will be. The results returned from the manual transcription for a given query are considered relevant and are expected to be retrieved with highest scores. This approach for measuring search effectiveness using manual data as a reference is very common in speech retrieval research [25, 22, 8, 9, 17]. Beside the recall and the precision, we use the evaluation measures defined by NIST for the 2006 STD evaluation [2]: the Actual Term-Weighted Value (ATWV) and the Maximum Term-Weighted Value (MTWV). 
The term-weighted value (TWV) is computed by first computing the miss and false alarm probabilities for each query separately, then using these and an (arbitrarily chosen) prior probability to compute query-specific values, and finally averaging these query-specific values over all queries q to produce an overall system value: where β = VC (Pr − 1 q--1). θ is the detection threshold. For the evaluation, the cost/value ratio, C/V, has been determined to 0.1 and the prior probability of a query Prq to 10 − 4. Therefore, β = 999.9. Miss and false alarm probabilities for a given query q are functions of θ: Table 1: WER and distribution of the error types over word 1-best path extracted from WCNs for the different source types. where: 9 Ncorrect (q, 0) is the number of correct detections (retrieved by the system) of the query q with a score greater than or equal to 0. 9 Nspurious (q, 0) is the number of spurious detections of the query q with a score greater than or equal to 0. 9 Ntrue (q) is the number of true occurrences of the query q in the corpus. 9 NNT (q) is the number of opportunities for incorrect detection of the query q in the corpus; it is the" NonTarget" query trials. It has been defined by the following formula: NNT (q) = Tspeech − Ntrue (q). Tspeech is the total amount of speech in the collection (in seconds). ATWV is the" actual term-weighted value"; it is the detection value attained by the system as a result of the system output and the binary decision output for each putative occurrence. It ranges from − ∞ to +1. MTWV is the" maximum term-weighted value" over the range of all possible values of 0. It ranges from 0 to +1. We have also provided the detection error tradeoff (DET) curve [19] of miss probability (Pmiss) vs. false alarm probability (PF A). We have used the STDEval tool to extract the relevant results from the manual transcripts and to compute ATWV, MTWV and the DET curve. We have determined empirically the following values for the boosting vector defined in Section 3.4: Bi = 1i. 4.2 WER analysis We use the word error rate (WER) in order to characterize the accuracy of the transcripts. WER is defined as follows: where N is the total number of words in the corpus, and S, I, and D are the total number of substitution, insertion, and deletion errors, respectively. The substitution error rate (SUBR) is defined by Deletion error rate (DELR) and insertion error rate (INSR) are defined in a similar manner. Table 1 gives the WER and the distribution of the error types over 1-best path transcripts extracted from WCNs. The WER of the 1-best path phonetic transcripts is approximately two times worse than the WER of word transcripts. That is the reason why we have not retrieved from phonetic transcripts on CONFMTG speech data. 4.3 Theta threshold We have determined empirically a detection threshold 0 per source type and the hard decision of the occurrences having a score less than 0 is set to false; false occurrences returned by the system are not considered as retrieved and therefore, are not used for computing ATWV, precision and recall. The value of the threshold 0 per source type is reported in Table 2. It is correlated to the accuracy of the transcripts. Basically, setting a threshold aims to eliminate from the retrieved occurrences, false alarms without adding misses. The higher the WER is, the higher the 0 threshold should be. BNEWS CTS CONFMTG 0.4 0.61 0.91 Table 2: Values of the 0 threshold per source type. 
4.4 Processing resource profile We report in Table 3 the processing resource profile. Concerning the index size, note that our index is compressed using IR index compression techniques. The indexing time includes both audio processing (generation of word and phonetic transcripts) and building of the searchable indices. Table 3: Processing resource profile. (HS: Hours of Speech. HP: Processing Hours. sec.P: Processing seconds) 4.5 Retrieval measures We compare our approach (WCN phonetic) presented in Section 4.1 with another approach (1-best-WCN phonetic). The only difference between these two approaches is that, in 1-best-WCN phonetic, we index only the 1-best path extracted from the WCN instead of indexing all the WCN. WCN phonetic was our primary system for the evaluation and 1-best-WCN phonetic was one of our contrastive systems. Average precision and recall, MTWV and ATWV on the 1100 queries are given in Table 4. We provide also the DET curve for WCN phonetic approach in Figure 2. The point that maximizes the TWV, the MTWV, is specified on each curve. Note that retrieval performance has been evaluated separately for each source type since the accuracy of the speech differs per source type as shown in Section 4.2. As expected, we can see that MTWV and ATWV decrease in higher WER. The retrieval performance is improved when Table 4: ATWV, MTWV, precision and recall per source type. Figure 2: DET curve for WCN phonetic approach. using WCNs relatively to 1-best path. It is due to the fact that miss probability is improved by indexing all the hypotheses provided by the WCNs. This observation confirms the results shown by Mamou et al. [17] in the context of spoken document retrieval. The ATWV that we have obtained is close to the MTWV; we have combined our ranking model with appropriate threshold θ to eliminate results with lower score. Therefore, the effect of false alarms added by WCNs is reduced. WCN phonetic approach was used in the recent NIST STD evaluation and received the highest overall ranking among eleven participants. For comparison, the system that ranked at the third place, obtained an ATWV of 0.8238 for BNEWS, 0.6652 for CTS and 0.1103 for CONFMTG. 4.6 Influence of the duration of the query on the retrieval performance We have analysed the retrieval performance according to the average duration of the occurrences in the manual transcripts. The query set was divided into three different quantiles according to the duration; we have reported in Table 5 ATWV and MTWV according to the duration. We can see that we performed better on longer queries. One of the reasons is the fact that the ASR system is more accurate on long words. Hence, it was justified to boost the score of the results with the exponent γn, as explained in Section 3.4.3, according to the length of the query. Table 5: ATWV, MTWV according to the duration of the query occurrences per source type. 4.7 OOV vs. IV query processing We have randomly chosen three sets of queries from the query sets provided by NIST: 50 queries containing only IV terms; 50 queries containing only OOV terms; and 50 hybrid queries containing both IV and OOV terms. The following experiment has been achieved on the BNEWS collection and IV and OOV terms has been determined according to the vocabulary of BNEWS ASR system. We would like to compare three different approaches of retrieval: using only word index; using only phonetic index; combining word and phonetic indices. 
Table 6 summarizes the retrieval performance according to each approach and to each type of queries. Using a word-based approach for dealing with OOV and hybrid queries affects drastically the performance of the retrieval; precision and recall are null. Using a phone-based approach for dealing with IV queries affects also the performance of the retrieval relatively to the word-based approach. As expected, the approach combining word and phonetic indices presented in Section 3 leads to the same retrieval performance as the word approach for IV queries and to the same retrieval performance as the phonetic approach for OOV queries. This approach always outperforms the others and it justifies the fact that we need to combine word and phonetic search. 5. RELATED WORK In the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents. Some of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12]. An LVCSR system is used to transcribe the speech into 1-best path word transcripts. The transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index. A generic IR system over the text is used for word spotting and search as described by Brown et al. [6] and James [14]. This strat Table 6: Comparison of word and phonetic approach on IV and OOV queries egy works well for transcripts like broadcast news collections that have a low WER (in the range of 15% -30%) and are redundant by nature (the same piece of information is spoken several times in different manners). Moreover, the algorithms have been mostly tested over long queries stated in plain English and retrieval for such queries is more robust against speech recognition errors. An alternative approach consists of using word lattices in order to improve the effectiveness of SDR. Singhal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors. From an IR perspective, a classical way to bring new terms is document expansion using a similar corpus. Their approach consists in using word lattices in order to determine which words returned by a document expansion algorithm should be added to the original transcript. The necessity to use a document expansion algorithm was justified by the fact that the word lattices they worked with, lack information about word probabilities. Chelba and Acero in [8, 9] propose a more compact word lattice, the position specific posterior lattice (PSPL). This data structure is similar to WCN and leads to a more compact index. The offset of the terms in the speech documents is also stored in the index. However, the evaluation framework is carried out on lectures that are relatively planned, in contrast to conversational speech. Their ranking model is based on the term confidence level but does not take into consideration the rank of the term among the other hypotheses. Mamou et al. [17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search. However, in the above works, the problem of queries containing OOV terms is not addressed. Popular approaches to deal with OOV queries are based on sub-words transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23]. The classical approach consists of using phonetic transcripts. 
The transcripts are indexed in the same manner as words in using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones. The retrieval is based on searching the string of phones representing the query in the phonetic transcript. To account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices. They are attractive as they accommodate high error rate conditions as well as allow for OOV queries to be used [15, 3, 20, 23, 21, 27]. However, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index. Moreover, beside the improvement in the recall of the search, the precision is affected since phonetic lattices are often inaccurate. Consequently, phonetic approaches should be used only for OOV search; for searching queries containing also IV terms, this technique affects the performance of the retrieval in comparison to the word based approach. Saraclar and Sproat in [22] show improvement in word spotting accuracy for both IV and OOV queries, using phonetic and word lattices, where a confidence measure of a word or a phone can be derived. They propose three different retrieval strategies: search both the word and the phonetic indices and unify the two different sets of results; search the word index for IV queries, search the phonetic index for OOV queries; search the word index and if no result is returned, search the phonetic index. However, no strategy is proposed to deal with phrase queries containing both IV and OOV terms. Amir et al. in [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval. However, the phonetic transcript is obtained from a text to phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data. An important issue to be considered when looking at the state-of-the-art in retrieval of spoken data, is the lack of a common test set and appropriate query terms. This paper uses such a task and the STD evaluation is a good summary of the performance of different approaches on the same test conditions. 6. CONCLUSIONS This work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources. Previously, phonetic-based and word-based approaches have been used for IR on speech data. The former suffers from low accuracy and the latter from limited vocabulary of the recognition system. In this paper, we have presented a vocabulary independent model of indexing and search that combines both the approaches. The system can deal with all kinds of queries although the phrases that need to combine for the retrieval, information extracted from two different indices, a word index and a phonetic index. The scoring of OOV terms is based on the proximity (in time) between the different phones. The scoring of IV terms is based on information provided by the WCNs. We have shown an improvement in the retrieval performance when using all the WCN and not only the 1-best path and when using phonetic index for search of OOV query terms. This approach always outperforms the other approaches using only word index or phonetic index. As a future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.
Vocabulary Independent Spoken Term Detection ABSTRACT We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems provide also phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purpose. The value of the proposed method is demonstrated by the relative high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1]. 1. INTRODUCTION The rapidly increasing amount of spoken data calls for solutions to index and search this data. The classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool. In the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word transcripts. Some of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12]. These tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals. One of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts. While the accuracy of automatic speech recognition (ASR) systems depends on the scenario and environment, state-of-the-art systems achieved better than 90% accuracy in transcription of such data. In 2000, Garofolo et al. concluded that "Spoken document retrieval is a solved problem" [12]. However, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results. OOV terms are missing words from the ASR system vocabulary and are replaced in the output transcript by alternatives that are probable, given the recognition acoustic model and the language model. It has been experimentally observed that over 10% of user queries can contain OOV terms [16], as queries often relate to named entities that typically have a poor coverage in the ASR vocabulary. The effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. [28]. In many applications the OOV rate may get worse over time unless the recognizer's vocabulary is periodically updated. Another approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. The retrieval is based on searching the sequence of phones representing the query in the phonetic transcripts. The main drawback of this approach is the inherent high error rate of the transcripts. 
Therefore, such approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system. A solution would be to combine the two different approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, the information is retrieved from the word index for IV terms and from the phonetic index for OOV terms. We would like to be able to process also hybrid queries, i.e, queries that include both IV and OOV terms. Consequently, we need to merge pieces of information retrieved from word index and phonetic index. Proximity information on the occurrences of the query terms is required for phrase search and for proximity-based ranking. In classical IR, the index stores for each occurrence of a term, its offset. Therefore, we cannot merge posting lists retrieved by phonetic index with those retrieved by word index since the offset of the occurrences retrieved from the two different indices are not comparable. The only element of comparison between phonetic and word transcripts are the timestamps. No previous work combining word and phonetic approach has been done on phrase search. We present a novel scheme for information retrieval that consists of storing, during the indexing process, for each unit of indexing (phone or word) its timestamp. We search queries by merging the information retrieved from the two different indices, word index and phonetic index, according to the timestamps of the query terms. We analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1]. The paper is organized as follows. We describe the audio processing in Section 2. The indexing and retrieval methods are presented in section 3. Experimental setup and results are given in Section 4. In Section 5, we give an overview of related work. Finally, we conclude in Section 6. 2. AUTOMATIC SPEECH RECOGNITION SYSTEM 3. RETRIEVAL MODEL 3.1 Spoken document detection task 3.2 Indexing 3.3 Search 3.4 Ranking 3.4.1 In vocabulary term ranking 3.4.2 Out of vocabulary term ranking 3.4.3 Combination 4. EXPERIMENTS 4.1 Experimental setup 4.2 WER analysis 4.3 Theta threshold BNEWS CTS CONFMTG 4.4 Processing resource profile 4.5 Retrieval measures 4.6 Influence of the duration of the query on the retrieval performance 4.7 OOV vs. IV query processing 5. RELATED WORK In the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents. Some of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12]. An LVCSR system is used to transcribe the speech into 1-best path word transcripts. The transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index. A generic IR system over the text is used for word spotting and search as described by Brown et al. [6] and James [14]. This strat Table 6: Comparison of word and phonetic approach on IV and OOV queries egy works well for transcripts like broadcast news collections that have a low WER (in the range of 15% -30%) and are redundant by nature (the same piece of information is spoken several times in different manners). Moreover, the algorithms have been mostly tested over long queries stated in plain English and retrieval for such queries is more robust against speech recognition errors. 
An alternative approach consists of using word lattices in order to improve the effectiveness of SDR. Singhal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors. From an IR perspective, a classical way to bring new terms is document expansion using a similar corpus. Their approach consists in using word lattices in order to determine which words returned by a document expansion algorithm should be added to the original transcript. The necessity to use a document expansion algorithm was justified by the fact that the word lattices they worked with, lack information about word probabilities. Chelba and Acero in [8, 9] propose a more compact word lattice, the position specific posterior lattice (PSPL). This data structure is similar to WCN and leads to a more compact index. The offset of the terms in the speech documents is also stored in the index. However, the evaluation framework is carried out on lectures that are relatively planned, in contrast to conversational speech. Their ranking model is based on the term confidence level but does not take into consideration the rank of the term among the other hypotheses. Mamou et al. [17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search. However, in the above works, the problem of queries containing OOV terms is not addressed. Popular approaches to deal with OOV queries are based on sub-words transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23]. The classical approach consists of using phonetic transcripts. The transcripts are indexed in the same manner as words in using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones. The retrieval is based on searching the string of phones representing the query in the phonetic transcript. To account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices. They are attractive as they accommodate high error rate conditions as well as allow for OOV queries to be used [15, 3, 20, 23, 21, 27]. However, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index. Moreover, beside the improvement in the recall of the search, the precision is affected since phonetic lattices are often inaccurate. Consequently, phonetic approaches should be used only for OOV search; for searching queries containing also IV terms, this technique affects the performance of the retrieval in comparison to the word based approach. Saraclar and Sproat in [22] show improvement in word spotting accuracy for both IV and OOV queries, using phonetic and word lattices, where a confidence measure of a word or a phone can be derived. They propose three different retrieval strategies: search both the word and the phonetic indices and unify the two different sets of results; search the word index for IV queries, search the phonetic index for OOV queries; search the word index and if no result is returned, search the phonetic index. However, no strategy is proposed to deal with phrase queries containing both IV and OOV terms. Amir et al. in [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval. 
However, the phonetic transcript is obtained from a text-to-phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data. An important issue to be considered when looking at the state of the art in retrieval of spoken data is the lack of a common test set and appropriate query terms. This paper uses such a task, and the STD evaluation gives a good summary of the performance of different approaches under the same test conditions.

6. CONCLUSIONS
This work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources. Previously, phonetic-based and word-based approaches have been used for IR on speech data. The former suffers from low accuracy and the latter from the limited vocabulary of the recognition system. In this paper, we have presented a vocabulary independent model of indexing and search that combines both approaches. The system can deal with all kinds of queries, although phrases containing both IV and OOV terms need to combine, for the retrieval, information extracted from two different indices, a word index and a phonetic index. The scoring of OOV terms is based on the proximity (in time) between the different phones. The scoring of IV terms is based on information provided by the WCNs. We have shown an improvement in the retrieval performance when using the whole WCN and not only the 1-best path, and when using the phonetic index for the search of OOV query terms. This approach always outperforms approaches using only the word index or only the phonetic index. As future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.
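To make the combined search concrete, here is a minimal Python sketch (ours, not the paper's implementation) of merging posting lists by timestamp for a hybrid phrase query. The posting layout (doc_id, begin_time), the max_gap tolerance, and the greedy choice of the earliest continuation are all illustrative assumptions.

    from collections import defaultdict

    def find_phrase_matches(term_postings, max_gap=0.5):
        # term_postings: one list per query term, in phrase order; IV terms
        # are fetched from the word index, OOV terms from the phonetic index.
        # Each posting is (doc_id, begin_time); only the shared timestamps
        # make the two kinds of posting lists mergeable.
        by_doc = []
        for plist in term_postings:
            d = defaultdict(list)
            for doc, t in plist:
                d[doc].append(t)
            by_doc.append(d)
        matches = []
        for doc, t0 in term_postings[0]:
            t_prev, ok = t0, True
            for i in range(1, len(term_postings)):
                # occurrences of term i starting shortly after term i-1
                nexts = [t for t in by_doc[i].get(doc, [])
                         if t_prev < t <= t_prev + max_gap]
                if not nexts:
                    ok = False
                    break
                t_prev = min(nexts)  # greedy: earliest plausible continuation
            if ok:
                matches.append((doc, t0))
        return matches

Because both indices store begin times rather than word or phone offsets, the same routine serves queries that are all-IV, all-OOV, or hybrid.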
Vocabulary Independent Spoken Term Detection

ABSTRACT
We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems provide also phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purposes. The value of the proposed method is demonstrated by the relatively high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1].

1. INTRODUCTION
The rapidly increasing amount of spoken data calls for solutions to index and search this data. The classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool. In the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word transcripts. Some of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12]. These tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals. One of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts. In 2000, Garofolo et al. concluded that "Spoken document retrieval is a solved problem" [12]. However, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results. OOV terms are missing words from the ASR system vocabulary and are replaced in the output transcript by alternatives that are probable, given the recognition acoustic model and the language model. The effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. [28]. Another approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. The retrieval is based on searching the sequence of phones representing the query in the phonetic transcripts. The main drawback of this approach is the inherent high error rate of the transcripts. Therefore, such an approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system. A solution is to combine the two approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, the information is retrieved from the word index for IV terms and from the phonetic index for OOV terms. We would also like to be able to process hybrid queries, i.e., queries that include both IV and OOV terms.
Consequently, we need to merge pieces of information retrieved from the word index and the phonetic index. Proximity information on the occurrences of the query terms is required for phrase search and for proximity-based ranking. In classical IR, the index stores, for each occurrence of a term, its offset. Therefore, we cannot merge posting lists retrieved from the phonetic index with those retrieved from the word index, since the offsets of the occurrences retrieved from the two different indices are not comparable. The only element of comparison between phonetic and word transcripts is the timestamp. No previous work combining the word and phonetic approaches has addressed phrase search. We present a novel scheme for information retrieval that consists of storing, during the indexing process, the timestamp of each unit of indexing (phone or word). We search queries by merging the information retrieved from the two different indices, the word index and the phonetic index, according to the timestamps of the query terms. We analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1]. We describe the audio processing in Section 2. The indexing and retrieval methods are presented in Section 3. Experimental setup and results are given in Section 4. In Section 5, we give an overview of related work. Finally, we conclude in Section 6.

5. RELATED WORK
In the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents. Some of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12]. An LVCSR system is used to transcribe the speech into 1-best path word transcripts. The transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index. A generic IR system over the text is used for word spotting and search, as described by Brown et al. [6] and James [14].

Table 6: Comparison of word and phonetic approach on IV and OOV queries

This strategy works well for broadcast news collections, whose transcripts have a low WER and are redundant by nature. Moreover, the algorithms have been mostly tested over long queries stated in plain English, and retrieval for such queries is more robust against speech recognition errors. An alternative approach consists of using word lattices in order to improve the effectiveness of SDR. Singhal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors. From an IR perspective, a classical way to bring in new terms is document expansion using a similar corpus. Their approach consists of using word lattices to determine which words returned by a document expansion algorithm should be added to the original transcript. The necessity of a document expansion algorithm was justified by the fact that the word lattices they worked with lack information about word probabilities. The position specific posterior lattice (PSPL) of Chelba and Acero [8, 9] is a data structure similar to WCN and leads to a more compact index. The offset of the terms in the speech documents is also stored in the index. Mamou et al. [17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search. However, in the above works, the problem of queries containing OOV terms is not addressed. Popular approaches to deal with OOV queries are based on sub-word transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23].
The classical approach consists of using phonetic transcripts. The transcripts are indexed in the same manner as words, using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones. The retrieval is based on searching the string of phones representing the query in the phonetic transcript. To account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices. However, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index. Moreover, besides the improvement in the recall of the search, the precision is affected, since phonetic lattices are often inaccurate. Consequently, phonetic approaches should be used only for OOV search; for queries that also contain IV terms, this technique degrades retrieval performance in comparison to the word-based approach. However, no strategy is proposed to deal with phrase queries containing both IV and OOV terms. Amir et al. in [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval. However, the phonetic transcript is obtained from a text-to-phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data. An important issue to be considered when looking at the state of the art in retrieval of spoken data is the lack of a common test set and appropriate query terms.

6. CONCLUSIONS
This work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources. Previously, phonetic-based and word-based approaches have been used for IR on speech data. The former suffers from low accuracy and the latter from the limited vocabulary of the recognition system. In this paper, we have presented a vocabulary independent model of indexing and search that combines both approaches. The system can deal with all kinds of queries, although phrases containing both IV and OOV terms need to combine, for the retrieval, information extracted from two different indices, a word index and a phonetic index. The scoring of OOV terms is based on the proximity (in time) between the different phones. The scoring of IV terms is based on information provided by the WCNs. We have shown an improvement in the retrieval performance when using the whole WCN and not only the 1-best path, and when using the phonetic index for the search of OOV query terms. This approach always outperforms approaches using only the word index or only the phonetic index. As future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.
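As a hedged illustration of the time-proximity scoring of OOV terms mentioned in the conclusions, the toy function below rewards candidate occurrences whose matched phones follow each other tightly. The threshold name theta echoes Section 4.3, but the constant and the exact penalty are our own assumptions, not the paper's formula.

    def score_oov_occurrence(phone_times, theta=0.2):
        # phone_times: begin times (in seconds) of the phones matched for an
        # OOV query term, in query order.  Tight sequences score close to 1;
        # gaps larger than theta seconds are penalized proportionally.
        gaps = [b - a for a, b in zip(phone_times, phone_times[1:])]
        if not gaps:
            return 1.0
        return sum(1.0 if g <= theta else theta / g for g in gaps) / len(gaps)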
H-35
AdaRank: A Boosting Algorithm for Information Retrieval
In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs "weak rankers" on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.
[ "boost", "inform retriev", "learn to rank", "document retriev", "rank model", "train rank model", "rankboost", "novel learn algorithm", "weak ranker", "re-weight train data", "train process", "machin learn", "support vector machin", "new learn algorithm", "rank model tune" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "M", "M" ]
AdaRank: A Boosting Algorithm for Information Retrieval

Jun Xu, Microsoft Research Asia, No. 49 Zhichun Road, Haidian District, Beijing, China 100080, junxu@microsoft.com
Hang Li, Microsoft Research Asia, No. 49 Zhichun Road, Haidian District, Beijing, China 100080, hangli@microsoft.com

ABSTRACT
In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs "weak rankers" on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.

Categories and Subject Descriptors
H.3.3 [Information Search and Retrieval]: Retrieval models

General Terms
Algorithms, Experimentation, Theory

1. INTRODUCTION
Recently "learning to rank" has gained increasing attention in both the fields of information retrieval and machine learning. When applied to document retrieval, learning to rank becomes a task as follows. In training, a ranking model is constructed with data consisting of queries, their corresponding retrieved documents, and relevance levels given by humans. In ranking, given a new query, the corresponding retrieved documents are sorted by using the trained ranking model. In document retrieval, ranking results are usually evaluated in terms of performance measures such as MAP (Mean Average Precision) [1] and NDCG (Normalized Discounted Cumulative Gain) [15]. Ideally, the ranking function is created so that the accuracy of ranking in terms of one of the measures with respect to the training data is maximized. Several methods for learning to rank have been developed and applied to document retrieval. For example, Herbrich et al. [13] propose a learning algorithm for ranking on the basis of Support Vector Machines, called Ranking SVM. Freund et al. [8] take a similar approach and perform the learning by using boosting, referred to as RankBoost. All the existing methods used for document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss functions loosely related to the IR performance measures, not loss functions directly based on the measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. In this paper, we aim to develop a new learning algorithm that can directly optimize any performance measure used in document retrieval.
Inspired by the work of AdaBoost for classification [9], we propose to develop a boosting algorithm for information retrieval, referred to as AdaRank. AdaRank utilizes a linear combination of "weak rankers" as its model. In learning, it repeats the process of re-weighting the training sample, creating a weak ranker, and calculating a weight for the ranker. We show that the AdaRank algorithm can iteratively optimize an exponential loss function based on any of the IR performance measures. A lower bound of the performance on training data is given, which indicates that the ranking accuracy in terms of the performance measure can be continuously improved during the training process. AdaRank offers several advantages: ease in implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results indicate that AdaRank can outperform the baseline methods of BM25, Ranking SVM, and RankBoost on four benchmark datasets including OHSUMED, WSJ, AP, and .Gov. Tuning ranking models using certain training data and a performance measure is a common practice in IR [1]. As the number of features in the ranking model gets larger and the amount of training data gets larger, the tuning becomes harder. From the viewpoint of IR, AdaRank can be viewed as a machine learning method for ranking model tuning. Recently, direct optimization of performance measures in learning has become a hot research topic. Several methods for classification [17] and ranking [5, 19] have been proposed. AdaRank can be viewed as a machine learning method for direct optimization of performance measures, based on a different approach. The rest of the paper is organized as follows. After a summary of related work in Section 2, we describe the proposed AdaRank algorithm in detail in Section 3. Experimental results and discussions are given in Section 4. Section 5 concludes this paper and gives future work.

2. RELATED WORK

2.1 Information Retrieval
The key problem for document retrieval is ranking, specifically, how to create the ranking model (function) that can sort documents based on their relevance to the given query. It is a common practice in IR to tune the parameters of a ranking model using some labeled data and one performance measure [1]. For example, the state-of-the-art methods of BM25 [24] and LMIR (Language Models for Information Retrieval) [18, 22] all have parameters to tune. As the ranking models become more sophisticated (more features are used) and more labeled data become available, how to tune or train ranking models turns out to be a challenging issue. Recently methods of "learning to rank" have been applied to ranking model construction and some promising results have been obtained. For example, Joachims [16] applies Ranking SVM to document retrieval. He utilizes click-through data to deduce training data for the model creation. Cao et al. [4] adapt Ranking SVM to document retrieval by modifying the Hinge Loss function to better meet the requirements of IR. Specifically, they introduce a Hinge Loss function that heavily penalizes errors on the tops of ranking lists and errors from queries with fewer retrieved documents. Burges et al. [3] employ Relative Entropy as a loss function and Gradient Descent as an algorithm to train a Neural Network model for ranking in document retrieval. The method is referred to as "RankNet".

2.2 Machine Learning
There are three topics in machine learning which are related to our current work.
They are "learning to rank", boosting, and direct optimization of performance measures. Learning to rank is to automatically create a ranking function that assigns scores to instances and then rank the instances by using the scores. Several approaches have been proposed to tackle the problem. One major approach to learning to rank is that of transforming it into binary classification on instance pairs. This "pair-wise" approach fits well with information retrieval and thus is widely used in IR. Typical methods of the approach include Ranking SVM [13], RankBoost [8], and RankNet [3]. For other approaches to learning to rank, refer to [2, 11, 31]. In the pair-wise approach to ranking, the learning task is formalized as a problem of classifying instance pairs into two categories (correctly ranked and incorrectly ranked). Actually, it is known that reducing classification errors on instance pairs is equivalent to maximizing a lower bound of MAP [16]. In that sense, the existing methods of Ranking SVM, RankBoost, and RankNet are only able to minimize loss functions that are loosely related to the IR performance measures. Boosting is a general technique for improving the accuracies of machine learning algorithms. The basic idea of boosting is to repeatedly construct "weak learners" by re-weighting training data and form an ensemble of weak learners such that the total performance of the ensemble is "boosted". Freund and Schapire have proposed the first well-known boosting algorithm called AdaBoost (Adaptive Boosting) [9], which is designed for binary classification (0-1 prediction). Later, Schapire & Singer have introduced a generalized version of AdaBoost in which weak learners can give confidence scores in their predictions rather than make 0-1 decisions [26]. Extensions have been made to deal with the problems of multi-class classification [10, 26], regression [7], and ranking [8]. In fact, AdaBoost is an algorithm that ingeniously constructs a linear model by minimizing the "exponential loss function" with respect to the training data [26]. Our work in this paper can be viewed as a boosting method developed for ranking, particularly for ranking in IR. Recently, a number of authors have proposed conducting direct optimization of multivariate performance measures in learning. For instance, Joachims [17] presents an SVM method to directly optimize nonlinear multivariate performance measures like the F1 measure for classification. Cossock & Zhang [5] find a way to approximately optimize the ranking performance measure DCG [15]. Metzler et al. [19] also propose a method of directly maximizing rank-based metrics for ranking on the basis of manifold learning. AdaRank is also one that tries to directly optimize multivariate performance measures, but is based on a different approach. AdaRank is unique in that it employs an exponential loss function based on IR performance measures and a boosting technique.

3. OUR METHOD: ADARANK

3.1 General Framework
We first describe the general framework of learning to rank for document retrieval. In retrieval (testing), given a query the system returns a ranking list of documents in descending order of the relevance scores. The relevance scores are calculated with a ranking function (model). In learning (training), a number of queries and their corresponding retrieved documents are given. Furthermore, the relevance levels of the documents with respect to the queries are also provided.
The relevance levels are represented as ranks (i.e., categories in a total order). The objective of learning is to construct a ranking function which achieves the best results in ranking of the training data in the sense of minimization of a loss function. Ideally the loss function is defined on the basis of the performance measure used in testing. Suppose that Y = {r_1, r_2, ..., r_ℓ} is a set of ranks, where ℓ denotes the number of ranks. There exists a total order between the ranks, r_ℓ ≻ r_{ℓ-1} ≻ ... ≻ r_1, where "≻" denotes a preference relationship. In training, a set of queries Q = {q_1, q_2, ..., q_m} is given. Each query q_i is associated with a list of retrieved documents d_i = {d_i1, d_i2, ..., d_i,n(q_i)} and a list of labels y_i = {y_i1, y_i2, ..., y_i,n(q_i)}, where n(q_i) denotes the sizes of the lists d_i and y_i, d_ij denotes the jth document in d_i, and y_ij ∈ Y denotes the rank of document d_ij. A feature vector x_ij = Ψ(q_i, d_ij) ∈ X is created from each query-document pair (q_i, d_ij), i = 1, 2, ..., m; j = 1, 2, ..., n(q_i). Thus, the training set can be represented as S = {(q_i, d_i, y_i)}_{i=1}^m. The objective of learning is to create a ranking function f : X → ℝ, such that for each query the elements in its corresponding document list can be assigned relevance scores using the function and then be ranked according to the scores. Specifically, we create a permutation of integers π(q_i, d_i, f) for query q_i, the corresponding list of documents d_i, and the ranking function f. Let d_i = {d_i1, d_i2, ..., d_i,n(q_i)} be identified by the list of integers {1, 2, ..., n(q_i)}; then the permutation π(q_i, d_i, f) is defined as a bijection from {1, 2, ..., n(q_i)} to itself. We use π(j) to denote the position of item j (i.e., d_ij). The learning process turns out to be that of minimizing the loss function which represents the disagreement between the permutation π(q_i, d_i, f) and the list of ranks y_i, for all of the queries.

Table 1: Notations and explanations.
  q_i ∈ Q : ith query
  d_i = {d_i1, d_i2, ..., d_i,n(q_i)} : list of documents for q_i
  y_ij ∈ {r_1, r_2, ..., r_ℓ} : rank of d_ij w.r.t. q_i
  y_i = {y_i1, y_i2, ..., y_i,n(q_i)} : list of ranks for q_i
  S = {(q_i, d_i, y_i)}_{i=1}^m : training set
  x_ij = Ψ(q_i, d_ij) ∈ X : feature vector for (q_i, d_ij)
  f(x_ij) ∈ ℝ : ranking model
  π(q_i, d_i, f) : permutation for q_i, d_i, and f
  h_t(x_ij) ∈ ℝ : tth weak ranker
  E(π(q_i, d_i, f), y_i) ∈ [−1, +1] : performance measure function

In this paper, we define the ranking model as a linear combination of weak rankers: f(x) = Σ_{t=1}^T α_t h_t(x), where h_t(x) is a weak ranker, α_t is its weight, and T is the number of weak rankers. In information retrieval, query-based performance measures are used to evaluate the "goodness" of a ranking function. By query-based measure, we mean a measure defined over a ranking list of documents with respect to a query. These measures include MAP, NDCG, MRR (Mean Reciprocal Rank), WTA (Winners Take All), and Precision@n [1, 15]. We utilize a general function E(π(q_i, d_i, f), y_i) ∈ [−1, +1] to represent the performance measures. The first argument of E is the permutation π created using the ranking function f on d_i. The second argument is the list of ranks y_i given by humans. E measures the agreement between π and y_i. Table 1 gives a summary of the notations described above.
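As a small illustration of the model just defined (ours, not from the paper), the following Python sketch evaluates f(x) = Σ_t α_t h_t(x) with single-feature weak rankers (see Section 3.6) and returns the induced permutation π for one query:

    def rank_documents(feature_vectors, weak_rankers, alphas):
        # feature_vectors: the x_ij vectors of one query's documents.
        # weak_rankers: feature indices chosen as the h_t; alphas: weights.
        # Returns pi as document indices sorted by f(x), highest score first.
        scores = [sum(a * x[h] for h, a in zip(weak_rankers, alphas))
                  for x in feature_vectors]
        return sorted(range(len(scores)), key=lambda j: -scores[j])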
Next, as examples of performance measures, we present the definitions of MAP and NDCG. Given a query q_i, the corresponding list of ranks y_i, and a permutation π_i on d_i, the average precision for q_i is defined as:

AvgP_i = ( Σ_{j=1}^{n(q_i)} P_i(j) · y_ij ) / ( Σ_{j=1}^{n(q_i)} y_ij ),   (1)

where y_ij takes on 1 and 0 as values, representing being relevant or irrelevant, and P_i(j) is defined as the precision at the position of d_ij:

P_i(j) = ( Σ_{k: π_i(k) ≤ π_i(j)} y_ik ) / π_i(j),   (2)

where π_i(j) denotes the position of d_ij. Given a query q_i, the list of ranks y_i, and a permutation π_i on d_i, NDCG at position m for q_i is defined as:

N_i = n_i · Σ_{j: π_i(j) ≤ m} (2^{y_ij} − 1) / log(1 + π_i(j)),   (3)

where y_ij takes on ranks as values and n_i is a normalization constant. n_i is chosen so that a perfect ranking π_i*'s NDCG score at position m is 1.

3.2 Algorithm
Inspired by the AdaBoost algorithm for classification, we have devised a novel algorithm which can optimize a loss function based on the IR performance measures. The algorithm is referred to as "AdaRank" and is shown in Figure 1. AdaRank takes a training set S = {(q_i, d_i, y_i)}_{i=1}^m as input and takes the performance measure function E and the number of iterations T as parameters. AdaRank runs T rounds, and at each round it creates a weak ranker h_t (t = 1, ..., T). Finally, it outputs a ranking model f by linearly combining the weak rankers.

Figure 1: The AdaRank algorithm.
  Input: S = {(q_i, d_i, y_i)}_{i=1}^m, and parameters E and T
  Initialize P_1(i) = 1/m.
  For t = 1, ..., T:
    - Create weak ranker h_t with weighted distribution P_t on training data S.
    - Choose α_t = (1/2) · ln[ Σ_{i=1}^m P_t(i){1 + E(π(q_i, d_i, h_t), y_i)} / Σ_{i=1}^m P_t(i){1 − E(π(q_i, d_i, h_t), y_i)} ].
    - Create f_t(x) = Σ_{k=1}^t α_k h_k(x).
    - Update P_{t+1}(i) = exp{−E(π(q_i, d_i, f_t), y_i)} / Σ_{j=1}^m exp{−E(π(q_j, d_j, f_t), y_j)}.
  End For
  Output ranking model: f(x) = f_T(x).

At each round, AdaRank maintains a distribution of weights over the queries in the training data. We denote the distribution of weights at round t as P_t and the weight on the ith training query q_i at round t as P_t(i). Initially, AdaRank sets equal weights to the queries. At each round, it increases the weights of those queries that are not ranked well by f_t, the model created so far. As a result, the learning at the next round will be focused on the creation of a weak ranker that can work on the ranking of those "hard" queries. At each round, a weak ranker h_t is constructed based on training data with weight distribution P_t. The goodness of a weak ranker is measured by the performance measure E weighted by P_t: Σ_{i=1}^m P_t(i) E(π(q_i, d_i, h_t), y_i). Several methods for weak ranker construction can be considered. For example, a weak ranker can be created by using a subset of queries (together with their document lists and label lists) sampled according to the distribution P_t. In this paper, we use single features as weak rankers, as will be explained in Section 3.6. Once a weak ranker h_t is built, AdaRank chooses a weight α_t > 0 for the weak ranker. Intuitively, α_t measures the importance of h_t. A ranking model f_t is created at each round by linearly combining the weak rankers constructed so far, h_1, ..., h_t, with weights α_1, ..., α_t. f_t is then used for updating the distribution P_{t+1}.
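A minimal runnable sketch of Figure 1 follows, using AvgP of Eqs. (1)-(2) as the measure E and single features as weak rankers as in Section 3.6. The data layout, the guard against a perfect weak ranker, and the absence of a stopping rule are our simplifying assumptions, not details from the paper.

    import math

    def average_precision(scores, labels):
        # AvgP of Eqs. (1)-(2); labels are 1/0 for relevant/irrelevant and
        # documents are ranked by decreasing score.
        order = sorted(range(len(scores)), key=lambda j: -scores[j])
        hits, ap = 0, 0.0
        for pos, j in enumerate(order, 1):
            if labels[j]:
                hits += 1
                ap += hits / pos
        return ap / max(1, sum(labels))

    def adarank(queries, T, E=average_precision):
        # queries: list of (X, y) pairs; X holds the feature vectors of one
        # query's documents, y the corresponding 1/0 labels.
        m = len(queries)
        k = len(queries[0][0][0])          # number of features
        P = [1.0 / m] * m                  # P_1(i) = 1/m
        rankers, alphas = [], []
        for _ in range(T):
            # weak ranker h_t: the feature with best P_t-weighted performance
            def weighted_perf(h):
                return sum(P[i] * E([x[h] for x in X], y)
                           for i, (X, y) in enumerate(queries))
            h = max(range(k), key=weighted_perf)
            Eh = [E([x[h] for x in X], y) for X, y in queries]
            num = sum(P[i] * (1 + Eh[i]) for i in range(m))
            den = max(sum(P[i] * (1 - Eh[i]) for i in range(m)), 1e-12)
            alphas.append(0.5 * math.log(num / den))
            rankers.append(h)
            # P_{t+1}(i) proportional to exp{-E(pi(q_i, d_i, f_t), y_i)}
            def f(x):
                return sum(a * x[hh] for hh, a in zip(rankers, alphas))
            w = [math.exp(-E([f(x) for x in X], y)) for X, y in queries]
            z = sum(w)
            P = [wi / z for wi in w]
        return rankers, alphas

A toy call such as adarank([([[0.2, 1.0], [0.9, 0.1]], [1, 0])], T=3) returns the selected feature indices and their weights.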
3.3 Theoretical Analysis
The existing learning algorithms for ranking attempt to minimize a loss function based on instance pairs (document pairs). In contrast, AdaRank tries to optimize a loss function based on queries. Furthermore, the loss function in AdaRank is defined on the basis of general IR performance measures. The measures can be MAP, NDCG, WTA, MRR, or any other measure whose range is within [−1, +1]. We next explain why this is the case. Ideally we want to maximize the ranking accuracy in terms of a performance measure on the training data:

max_{f ∈ F} Σ_{i=1}^m E(π(q_i, d_i, f), y_i),   (4)

where F is the set of possible ranking functions. This is equivalent to minimizing the loss on the training data:

min_{f ∈ F} Σ_{i=1}^m (1 − E(π(q_i, d_i, f), y_i)).   (5)

It is difficult to directly optimize this loss, because E is a non-continuous function and thus may be difficult to handle. We instead attempt to minimize an upper bound of the loss in (5):

min_{f ∈ F} Σ_{i=1}^m exp{−E(π(q_i, d_i, f), y_i)},   (6)

because e^{−x} ≥ 1 − x holds for any x ∈ ℝ. We consider the use of a linear combination of weak rankers as our ranking model:

f(x) = Σ_{t=1}^T α_t h_t(x).   (7)

The minimization in (6) then turns out to be

min_{h_t ∈ H, α_t ∈ ℝ+} L(h_t, α_t) = Σ_{i=1}^m exp{−E(π(q_i, d_i, f_{t−1} + α_t h_t), y_i)},   (8)

where H is the set of possible weak rankers, α_t is a positive weight, and (f_{t−1} + α_t h_t)(x) = f_{t−1}(x) + α_t h_t(x). Several ways of computing the coefficients α_t and the weak rankers h_t may be considered. Following the idea of AdaBoost, in AdaRank we take the approach of "forward stage-wise additive modeling" [12] and get the algorithm in Figure 1. It can be proved that there exists a lower bound on the ranking accuracy for AdaRank on training data, as presented in Theorem 1.

Theorem 1. The following bound holds on the ranking accuracy of the AdaRank algorithm on training data:

(1/m) Σ_{i=1}^m E(π(q_i, d_i, f_T), y_i) ≥ 1 − Π_{t=1}^T [ e^{−δ_min^t} · √(1 − ϕ(t)^2) ],

where ϕ(t) = Σ_{i=1}^m P_t(i) E(π(q_i, d_i, h_t), y_i), δ_min^t = min_{i=1,...,m} δ_i^t, and

δ_i^t = E(π(q_i, d_i, f_{t−1} + α_t h_t), y_i) − E(π(q_i, d_i, f_{t−1}), y_i) − α_t E(π(q_i, d_i, h_t), y_i),

for all i = 1, 2, ..., m and t = 1, 2, ..., T. A proof of the theorem can be found in the appendix. The theorem implies that the ranking accuracy in terms of the performance measure can be continuously improved, as long as e^{−δ_min^t} · √(1 − ϕ(t)^2) < 1 holds.

3.4 Advantages
AdaRank is a simple yet powerful method. More importantly, it is a method that can be justified from the theoretical viewpoint, as discussed above. In addition, AdaRank has several other advantages when compared with the existing learning to rank methods such as Ranking SVM, RankBoost, and RankNet. First, AdaRank can incorporate any performance measure, provided that the measure is query based and in the range of [−1, +1]. Notice that the major IR measures meet this requirement. In contrast, the existing methods only minimize loss functions that are loosely related to the IR measures [16]. Second, the learning process of AdaRank is more efficient than those of the existing learning algorithms. The time complexity of AdaRank is of order O((k + T) · m · n log n), where k denotes the number of features, T the number of rounds, m the number of queries in training data, and n the maximum number of documents per query in training data. The time complexity of RankBoost, for example, is of order O(T · m · n^2) [8]. Third, AdaRank employs a more reasonable framework for performing the ranking task than the existing methods. Specifically, in AdaRank the instances correspond to queries, while in the existing methods the instances correspond to document pairs. As a result, AdaRank does not have the following shortcomings that plague the existing methods.
(a) The existing methods have to make a strong assumption that the document pairs from the same query are independently distributed. In reality, this is clearly not the case, and this problem does not exist for AdaRank. (b) Ranking the most relevant documents at the tops of document lists is crucial for document retrieval. The existing methods cannot focus the training on the tops, as indicated in [4]. Several methods for rectifying the problem have been proposed (e.g., [4]); however, they do not seem to fundamentally solve the problem. In contrast, AdaRank can naturally focus the training on the tops of document lists, because the performance measures used favor rankings for which relevant documents are on the tops. (c) In the existing methods, the numbers of document pairs vary from query to query, resulting in models biased toward queries with more document pairs, as pointed out in [4]. AdaRank does not have this drawback, because it treats queries rather than document pairs as basic units in learning.

3.5 Differences from AdaBoost
AdaRank is a boosting algorithm. In that sense, it is similar to AdaBoost, but it also has several striking differences from AdaBoost. First, the types of instances are different. AdaRank makes use of queries and their corresponding document lists as instances. The labels in training data are lists of ranks (relevance levels). AdaBoost makes use of feature vectors as instances. The labels in training data are simply +1 and −1. Second, the performance measures are different. In AdaRank, the performance measure is a generic measure, defined on the document list and the rank list of a query. In AdaBoost the corresponding performance measure is a specific measure for binary classification, also referred to as "margin" [25]. Third, the ways of updating weights are also different. In AdaBoost, the distribution of weights on training instances is calculated according to the current distribution and the performance of the current weak learner. In AdaRank, in contrast, it is calculated according to the performance of the ranking model created so far, as shown in Figure 1. Note that AdaBoost can also adopt the weight updating method used in AdaRank. For AdaBoost they are equivalent (cf. [12], page 305). However, this is not true for AdaRank.

3.6 Construction of Weak Ranker
We consider an efficient implementation for weak ranker construction, which is also used in our experiments. In the implementation, as the weak ranker we choose the feature that has the optimal weighted performance among all of the features:

max_k Σ_{i=1}^m P_t(i) E(π(q_i, d_i, x_k), y_i).

Creating weak rankers in this way, the learning process turns out to be that of repeatedly selecting features and linearly combining the selected features. Note that features which are not selected in the training phase will have a weight of zero.

4. EXPERIMENTAL RESULTS
We conducted experiments to test the performance of AdaRank using four benchmark datasets: OHSUMED, WSJ, AP, and .Gov.

Table 2: Features used in the experiments on the OHSUMED, WSJ, and AP datasets. c(w, d) represents the frequency of word w in document d; C represents the entire collection; n denotes the number of terms in the query; |·| denotes the size function; and idf(·) denotes the inverse document frequency.
  1. Σ_{w_i ∈ q∩d} ln(c(w_i, d) + 1)
  2. Σ_{w_i ∈ q∩d} ln(|C| / c(w_i, C) + 1)
  3. Σ_{w_i ∈ q∩d} ln(idf(w_i))
  4. Σ_{w_i ∈ q∩d} ln(c(w_i, d) / |d| + 1)
  5. Σ_{w_i ∈ q∩d} ln((c(w_i, d) / |d|) · idf(w_i) + 1)
  6. Σ_{w_i ∈ q∩d} ln(c(w_i, d) · |C| / (|d| · c(w_i, C)) + 1)
  7. ln(BM25 score)

Figure 2: Ranking accuracies on OHSUMED data.
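The features of Table 2 are simple enough to compute directly. The sketch below illustrates features 1-6, leaving out feature 7, ln(BM25); reading the flattened subscripts as sums over query terms that occur in the document, the idf convention, and the argument names are our assumptions.

    import math

    def table2_features(query, doc, cf, C, idf):
        # query: list of query terms; doc: list of document tokens;
        # cf[w]: collection frequency c(w, C); C: collection size |C|;
        # idf[w]: inverse document frequency of w.
        dl = len(doc)
        f = [0.0] * 6
        for w in set(query):
            c = doc.count(w)        # c(w, d)
            if c == 0:
                continue
            f[0] += math.log(c + 1)
            f[1] += math.log(C / cf[w] + 1)
            f[2] += math.log(idf[w])
            f[3] += math.log(c / dl + 1)
            f[4] += math.log((c / dl) * idf[w] + 1)
            f[5] += math.log(c * C / (dl * cf[w]) + 1)
        return f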
4.1 Experiment Setting
Ranking SVM [13, 16] and RankBoost [8] were selected as baselines in the experiments, because they are the state-of-the-art learning to rank methods. Furthermore, BM25 [24] was used as a baseline, representing the state-of-the-art IR method (we actually used the Lemur tool, http://www.lemurproject.com). For AdaRank, the parameter T was determined automatically during each experiment. Specifically, when there is no improvement in ranking accuracy in terms of the performance measure, the iteration stops (and T is determined). As the measure E, MAP and NDCG@5 were utilized. The results for AdaRank using MAP and NDCG@5 as measures in training are denoted as AdaRank.MAP and AdaRank.NDCG, respectively.

4.2 Experiment with OHSUMED Data
In this experiment, we made use of the OHSUMED dataset [14] to test the performance of AdaRank. The OHSUMED dataset consists of 348,566 documents and 106 queries. There are in total 16,140 query-document pairs upon which relevance judgments are made. The relevance judgments are either "d" (definitely relevant), "p" (possibly relevant), or "n" (not relevant). The data have been used in many experiments in IR, for example [4, 29]. As features, we adopted those used in document retrieval [4]. Table 2 shows the features. For example, tf (term frequency), idf (inverse document frequency), dl (document length), and combinations of them are defined as features. The BM25 score itself is also a feature. Stop words were removed and stemming was conducted on the data. We randomly divided the queries into four even subsets and conducted 4-fold cross-validation experiments. We tuned the parameters for BM25 during one of the trials and applied them to the other trials. The results reported in Figure 2 are those averaged over the four trials. In MAP calculation, we define the rank "d" as relevant and the other two ranks as irrelevant. From Figure 2, we see that both AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures. We conducted significance tests (t-test) on the improvements of AdaRank.MAP over BM25, Ranking SVM, and RankBoost in terms of MAP. The results indicate that all the improvements are statistically significant (p-value < 0.05). We also conducted t-tests on the improvements of AdaRank.NDCG over BM25, Ranking SVM, and RankBoost in terms of NDCG@5. The improvements are also statistically significant.

Table 3: Statistics on the WSJ and AP datasets.
  Dataset   # queries   # retrieved docs   # docs per query
  AP        116         24,727             213.16
  WSJ       126         40,230             319.29

Figure 3: Ranking accuracies on the WSJ dataset.

4.3 Experiment with WSJ and AP Data
In this experiment, we made use of the WSJ and AP datasets from the TREC ad-hoc retrieval track to test the performance of AdaRank. WSJ contains 74,520 articles of the Wall Street Journal from 1990 to 1992, and AP contains 158,240 articles of the Associated Press in 1988 and 1990. 200 queries were selected from the TREC topics (No. 101 to No. 300). Each query has a number of documents associated with it, and they are labeled as "relevant" or "irrelevant" (to the query). Following the practice in [28], the queries that have fewer than 10 relevant documents were discarded. Table 3 shows the statistics on the two datasets. In the same way as in Section 4.2, we adopted the features listed in Table 2 for ranking. We also conducted 4-fold cross-validation experiments.
The results reported in Figures 3 and 4 are those averaged over four trials on the WSJ and AP datasets, respectively. From Figures 3 and 4, we can see that AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures on both WSJ and AP. We conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost on WSJ and AP. The results indicate that all the improvements in terms of MAP are statistically significant (p-value < 0.05). However, only some of the improvements in terms of NDCG@5 are statistically significant, although overall the improvements on NDCG scores are quite high (1-2 points).

Figure 4: Ranking accuracies on the AP dataset.

4.4 Experiment with .Gov Data
In this experiment, we further made use of the TREC .Gov data to test the performance of AdaRank for the task of web retrieval. The corpus is a crawl from the .gov domain in early 2002, and has been used at the TREC Web Track since 2002. There are a total of 1,053,110 web pages with 11,164,829 hyperlinks in the data. The 50 queries in the topic distillation task in the Web Track of TREC 2003 [6] were used. The ground truths for the queries are provided by the TREC committee with binary judgments: relevant or irrelevant. The number of relevant pages varies from query to query (from 1 to 86). We extracted 14 features from each query-document pair. Table 4 gives a list of the features. They are the outputs of some well-known algorithms (systems). These features are different from those in Table 2, because the task is different. Again, we conducted 4-fold cross-validation experiments. The results averaged over four trials are reported in Figure 5. From the results, we can see that AdaRank.MAP and AdaRank.NDCG outperform all the baselines in terms of all measures. We conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost. Some of the improvements are not statistically significant. This is because only 50 queries were used in the experiments, and this number of queries is too small.

Figure 5: Ranking accuracies on the .Gov dataset.

Table 4: Features used in the experiments on the .Gov dataset.
  1. BM25 [24]
  2. MSRA1000 [27]
  3. PageRank [21]
  4. HostRank [30]
  5. Relevance Propagation [23] (10 features)

4.5 Discussions
We investigated the reasons that AdaRank outperforms the baseline methods, using the results on the OHSUMED dataset as examples. First, we examined the reason that AdaRank has higher performance than Ranking SVM and RankBoost. Specifically, we compared the error rates between different rank pairs made by Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG on the test data. The results averaged over the four trials in the 4-fold cross-validation are shown in Figure 6.

Figure 6: Accuracy on ranking document pairs with the OHSUMED dataset.

Figure 7: Distribution of queries with different numbers of document pairs in the training data of trial 1.
We use "d-n" to stand for the pairs between "definitely relevant" and "not relevant", "d-p" for the pairs between "definitely relevant" and "partially relevant", and "p-n" for the pairs between "partially relevant" and "not relevant". From Figure 6, we can see that AdaRank.MAP and AdaRank.NDCG make fewer errors for "d-n" and "d-p", which are related to the tops of the rankings and are important. This is because AdaRank.MAP and AdaRank.NDCG can naturally focus the training on the tops by optimizing MAP and NDCG@5, respectively. We also collected statistics on the number of document pairs per query in the training data (for trial 1). The queries are clustered into different groups based on the number of their associated document pairs. Figure 7 shows the distribution of the query groups. In the figure, for example, "0-1k" is the group of queries whose number of document pairs is between 0 and 999. We can see that the numbers of document pairs really vary from query to query. Next we evaluated the accuracies of AdaRank.MAP and RankBoost in terms of MAP for each of the query groups. The results are reported in Figure 8. We found that the average MAP of AdaRank.MAP over the groups is two points higher than that of RankBoost. Furthermore, it is interesting to see that AdaRank.MAP performs particularly well compared with RankBoost for queries with small numbers of document pairs (e.g., "0-1k", "1k-2k", and "2k-3k"). The results indicate that AdaRank.MAP can effectively avoid creating a model biased towards queries with more document pairs. For AdaRank.NDCG, similar results can be observed.

Figure 8: Differences in MAP for different query groups.

Figure 9: MAP on the training set when the model is trained with MAP or NDCG@5.

We further conducted an experiment to see whether AdaRank has the ability to improve the ranking accuracy in terms of a measure by using that measure in training. Specifically, we trained ranking models using AdaRank.MAP and AdaRank.NDCG and evaluated their accuracies on the training dataset in terms of both MAP and NDCG@5. The experiment was conducted for each trial. Figure 9 and Figure 10 show the results in terms of MAP and NDCG@5, respectively. We can see that AdaRank.MAP trained with MAP performs better in terms of MAP, while AdaRank.NDCG trained with NDCG@5 performs better in terms of NDCG@5. The results indicate that AdaRank can indeed enhance ranking performance in terms of a measure by using the measure in training. Finally, we tried to verify the correctness of Theorem 1, that is, that the ranking accuracy in terms of the performance measure can be continuously improved as long as e^{−δ_min^t} · √(1 − ϕ(t)^2) < 1 holds. As an example, Figure 11 shows the learning curve of AdaRank.MAP in terms of MAP during the training phase in one trial of the cross-validation. From the figure, we can see that the ranking accuracy of AdaRank.MAP steadily improves as the training goes on, until it reaches its peak. The result agrees well with Theorem 1.

5. CONCLUSION AND FUTURE WORK
In this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank. In contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures. It employs a boosting technique in ranking model learning.
AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost.

Figure 10: NDCG@5 on the training set when the model is trained with MAP or NDCG@5.

Figure 11: Learning curve of AdaRank.

Future work includes theoretical analysis on the generalization error and other properties of the AdaRank algorithm, and further empirical evaluations of the algorithm, including comparisons with other algorithms that can directly optimize performance measures.

6. ACKNOWLEDGMENTS
We thank Harry Shum, Wei-Ying Ma, Tie-Yan Liu, Gu Xu, Bin Gao, Robert Schapire, and Andrew Arnold for their valuable comments and suggestions on this paper.

7. REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison Wesley, May 1999.
[2] C. Burges, R. Ragno, and Q. Le. Learning to rank with nonsmooth cost functions. In Advances in Neural Information Processing Systems 18, pages 395-402. MIT Press, Cambridge, MA, 2006.
[3] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML 22, pages 89-96, 2005.
[4] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon. Adapting ranking SVM to document retrieval. In SIGIR 29, pages 186-193, 2006.
[5] D. Cossock and T. Zhang. Subset ranking using regression. In COLT, pages 605-619, 2006.
[6] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Overview of the TREC 2003 web track. In TREC, pages 78-92, 2003.
[7] N. Duffy and D. Helmbold. Boosting methods for regression. Mach. Learn., 47(2-3):153-200, 2002.
[8] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[9] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119-139, 1997.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337-374, 2000.
[11] G. Fung, R. Rosales, and B. Krishnapuram. Learning rankings via convex hull separation. In Advances in Neural Information Processing Systems 18, pages 395-402. MIT Press, Cambridge, MA, 2006.
[12] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning. Springer, August 2001.
[13] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. MIT Press, Cambridge, MA, 2000.
[14] W. Hersh, C. Buckley, T. J. Leone, and D. Hickam. OHSUMED: an interactive retrieval evaluation and new large test collection for research. In SIGIR, pages 192-201, 1994.
[15] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents. In SIGIR 23, pages 41-48, 2000.
[16] T. Joachims. Optimizing search engines using clickthrough data. In SIGKDD 8, pages 133-142, 2002.
[17] T. Joachims. A support vector method for multivariate performance measures. In ICML 22, pages 377-384, 2005.
[18] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In SIGIR 24, pages 111-119, 2001.
[19] D. A. Metzler, W. B. Croft, and A. McCallum. Direct maximization of rank-based metrics for information retrieval. Technical report, CIIR, 2005.
[20] R. Nallapati. Discriminative models for information retrieval. In SIGIR 27, pages 64-71, 2004.
[21] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, 1998.
[22] J. M. Ponte and W. B. Croft. A language modeling approach to information retrieval. In SIGIR 21, pages 275-281, 1998.
[23] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A study of relevance propagation for web search. In SIGIR 28, pages 408-415, 2005.
[24] S. E. Robertson and D. A. Hull. The TREC-9 filtering track final report. In TREC, pages 25-40, 2000.
[25] R. E. Schapire, Y. Freund, P. Barlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML 14, pages 322-330, 1997.
[26] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Mach. Learn., 37(3):297-336, 1999.
[27] R. Song, J. Wen, S. Shi, G. Xin, T.-Y. Liu, T. Qin, X. Zheng, J. Zhang, G. Xue, and W.-Y. Ma. Microsoft Research Asia at the web track and terabyte track of TREC 2004. In TREC, 2004.
[28] A. Trotman. Learning to rank. Inf. Retr., 8(3):359-381, 2005.
[29] J. Xu, Y. Cao, H. Li, and Y. Huang. Cost-sensitive learning of SVM for ranking. In ECML, pages 833-840, 2006.
[30] G.-R. Xue, Q. Yang, H.-J. Zeng, Y. Yu, and Z. Chen. Exploiting the hierarchical structure for link analysis. In SIGIR 28, pages 186-193, 2005.
[31] H. Yu. SVM selective sampling for ranking with application to data retrieval. In SIGKDD 11, pages 354-363, 2005.

APPENDIX
Here we give the proof of Theorem 1.

Proof. Set Z_T = Σ_{i=1}^m exp{−E(π(q_i, d_i, f_T), y_i)} and φ(t) = (1 + ϕ(t))/2. According to the definition of α_t, we know that e^{α_t} = √( φ(t) / (1 − φ(t)) ). Then

Z_T = Σ_{i=1}^m exp{−E(π(q_i, d_i, f_{T−1} + α_T h_T), y_i)}
    = Σ_{i=1}^m exp{−E(π(q_i, d_i, f_{T−1}), y_i) − α_T E(π(q_i, d_i, h_T), y_i) − δ_i^T}
    ≤ Σ_{i=1}^m exp{−E(π(q_i, d_i, f_{T−1}), y_i)} exp{−α_T E(π(q_i, d_i, h_T), y_i)} e^{−δ_min^T}
    = e^{−δ_min^T} Z_{T−1} Σ_{i=1}^m [ exp{−E(π(q_i, d_i, f_{T−1}), y_i)} / Z_{T−1} ] exp{−α_T E(π(q_i, d_i, h_T), y_i)}
    = e^{−δ_min^T} Z_{T−1} Σ_{i=1}^m P_T(i) exp{−α_T E(π(q_i, d_i, h_T), y_i)}.

Moreover, if E(π(q_i, d_i, h_T), y_i) ∈ [−1, +1], then

Z_T ≤ e^{−δ_min^T} Z_{T−1} Σ_{i=1}^m P_T(i) [ ((1 + E(π(q_i, d_i, h_T), y_i))/2) e^{−α_T} + ((1 − E(π(q_i, d_i, h_T), y_i))/2) e^{α_T} ]
    = e^{−δ_min^T} Z_{T−1} [ √((1 − φ(T))/φ(T)) · φ(T) + (1 − φ(T)) · √(φ(T)/(1 − φ(T))) ]
    = Z_{T−1} e^{−δ_min^T} √(4 φ(T)(1 − φ(T)))
    ≤ Z_{T−2} Π_{t=T−1}^T e^{−δ_min^t} √(4 φ(t)(1 − φ(t)))
    ≤ Z_1 Π_{t=2}^T e^{−δ_min^t} √(4 φ(t)(1 − φ(t)))
    = m Σ_{i=1}^m (1/m) exp{−E(π(q_i, d_i, α_1 h_1), y_i)} Π_{t=2}^T e^{−δ_min^t} √(4 φ(t)(1 − φ(t)))
    = m Σ_{i=1}^m (1/m) exp{−α_1 E(π(q_i, d_i, h_1), y_i) − δ_i^1} Π_{t=2}^T e^{−δ_min^t} √(4 φ(t)(1 − φ(t)))
    ≤ m e^{−δ_min^1} Σ_{i=1}^m (1/m) exp{−α_1 E(π(q_i, d_i, h_1), y_i)} Π_{t=2}^T e^{−δ_min^t} √(4 φ(t)(1 − φ(t)))
    ≤ m e^{−δ_min^1} √(4 φ(1)(1 − φ(1))) Π_{t=2}^T e^{−δ_min^t} √(4 φ(t)(1 − φ(t)))
    = m Π_{t=1}^T e^{−δ_min^t} √(1 − ϕ(t)^2),

since 4 φ(t)(1 − φ(t)) = 1 − ϕ(t)^2. Therefore,

(1/m) Σ_{i=1}^m E(π(q_i, d_i, f_T), y_i) ≥ (1/m) Σ_{i=1}^m [1 − exp{−E(π(q_i, d_i, f_T), y_i)}] ≥ 1 − Π_{t=1}^T e^{−δ_min^t} √(1 − ϕ(t)^2).
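The per-round factor √(1 − ϕ(t)^2) in the proof comes from the choice of α_t in Figure 1. The snippet below (ours, for intuition only) checks numerically that with α = (1/2) ln((1 + ϕ)/(1 − ϕ)), the quantity ((1 + ϕ)/2)e^{−α} + ((1 − ϕ)/2)e^{α} bounding Z_T / Z_{T−1} indeed equals √(1 − ϕ^2):

    import math

    def per_round_factor(phi):
        # alpha_t as AdaRank chooses it for a weighted performance phi(t)
        alpha = 0.5 * math.log((1 + phi) / (1 - phi))
        lhs = (1 + phi) / 2 * math.exp(-alpha) + (1 - phi) / 2 * math.exp(alpha)
        return lhs, math.sqrt(1 - phi * phi)

    print(per_round_factor(0.3))   # both values: 0.9539...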
AdaRank: A Boosting Algorithm for Information Retrieval

ABSTRACT
In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs "weak rankers" on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.

1. INTRODUCTION
Recently "learning to rank" has gained increasing attention in both the fields of information retrieval and machine learning. When applied to document retrieval, learning to rank becomes a task as follows. In training, a ranking model is constructed with data consisting of queries, their corresponding retrieved documents, and relevance levels given by humans. In ranking, given a new query, the corresponding retrieved documents are sorted by using the trained ranking model. In document retrieval, ranking results are usually evaluated in terms of performance measures such as MAP (Mean Average Precision) [1] and NDCG (Normalized Discounted Cumulative Gain) [15]. Ideally, the ranking function is created so that the accuracy of ranking in terms of one of the measures with respect to the training data is maximized. Several methods for learning to rank have been developed and applied to document retrieval. For example, Herbrich et al. [13] propose a learning algorithm for ranking on the basis of Support Vector Machines, called Ranking SVM. Freund et al. [8] take a similar approach and perform the learning by using boosting, referred to as RankBoost. All the existing methods used for document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss functions loosely related to the IR performance measures, not loss functions directly based on the measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. In this paper, we aim to develop a new learning algorithm that can directly optimize any performance measure used in document retrieval. Inspired by the work of AdaBoost for classification [9], we propose to develop a boosting algorithm for information retrieval, referred to as AdaRank. AdaRank utilizes a linear combination of "weak rankers" as its model. In learning, it repeats the process of re-weighting the training sample, creating a weak ranker, and calculating a weight for the ranker.
We show that the AdaRank algorithm can iteratively optimize an exponential loss function based on any of the IR performance measures. A lower bound of the performance on training data is given, which indicates that the ranking accuracy in terms of the performance measure can be continuously improved during the training process. AdaRank offers several advantages: ease in implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results indicate that AdaRank can outperform the baseline methods of BM25, Ranking SVM, and RankBoost, on four benchmark datasets including OHSUMED, WSJ, AP, and .Gov. Tuning ranking models using certain training data and a performance measure is a common practice in IR [1]. As the number of features in the ranking model gets larger and the amount of training data gets larger, the tuning becomes harder. From the viewpoint of IR, AdaRank can be viewed as a machine learning method for ranking model tuning. Recently, direct optimization of performance measures in learning has become a hot research topic. Several methods for classification [17] and ranking [5, 19] have been proposed. AdaRank can be viewed as a machine learning method for direct optimization of performance measures, based on a different approach. The rest of the paper is organized as follows. After a summary of related work in Section 2, we describe the proposed AdaRank algorithm in detail in Section 3. Experimental results and discussions are given in Section 4. Section 5 concludes this paper and gives future work. 2. RELATED WORK 2.1 Information Retrieval The key problem for document retrieval is ranking, specifically, how to create the ranking model (function) that can sort documents based on their relevance to the given query. It is a common practice in IR to tune the parameters of a ranking model using some labeled data and one performance measure [1]. For example, the state-of-the-art methods of BM25 [24] and LMIR (Language Models for Information Retrieval) [18, 22] all have parameters to tune. As the ranking models become more sophisticated (more features are used) and more labeled data become available, how to tune or train ranking models turns out to be a challenging issue. Recently, methods of ` learning to rank' have been applied to ranking model construction and some promising results have been obtained. For example, Joachims [16] applies Ranking SVM to document retrieval. He utilizes click-through data to deduce training data for the model creation. Cao et al. [4] adapt Ranking SVM to document retrieval by modifying the Hinge Loss function to better meet the requirements of IR. Specifically, they introduce a Hinge Loss function that heavily penalizes errors on the tops of ranking lists and errors from queries with fewer retrieved documents. Burges et al. [3] employ Relative Entropy as a loss function and Gradient Descent as an algorithm to train a Neural Network model for ranking in document retrieval. The method is referred to as ` RankNet'. 2.2 Machine Learning There are three topics in machine learning which are related to our current work. They are ` learning to rank', boosting, and direct optimization of performance measures. Learning to rank is to automatically create a ranking function that assigns scores to instances and then rank the instances by using the scores. Several approaches have been proposed to tackle the problem. One major approach to learning to rank is that of transforming it into binary classification on instance pairs.
This ` pair-wise' approach fits well with information retrieval and thus is widely used in IR. Typical methods of the approach include Ranking SVM [13], RankBoost [8], and RankNet [3]. For other approaches to learning to rank, refer to [2, 11, 31]. In the pair-wise approach to ranking, the learning task is formalized as a problem of classifying instance pairs into two categories (correctly ranked and incorrectly ranked). Actually, it is known that reducing classification errors on instance pairs is equivalent to maximizing a lower bound of MAP [16]. In that sense, the existing methods of Ranking SVM, RankBoost, and RankNet are only able to minimize loss functions that are loosely related to the IR performance measures. Boosting is a general technique for improving the accuracies of machine learning algorithms. The basic idea of boosting is to repeatedly construct ` weak learners' by re-weighting training data and form an ensemble of weak learners such that the total performance of the ensemble is ` boosted'. Freund and Schapire have proposed the first well-known boosting algorithm called AdaBoost (Adaptive Boosting) [9], which is designed for binary classification (0-1 prediction). Later, Schapire & Singer have introduced a generalized version of AdaBoost in which weak learners can give confidence scores in their predictions rather than make 0-1 decisions [26]. Extensions have been made to deal with the problems of multi-class classification [10, 26], regression [7], and ranking [8]. In fact, AdaBoost is an algorithm that ingeniously constructs a linear model by minimizing the ` exponential loss function' with respect to the training data [26]. Our work in this paper can be viewed as a boosting method developed for ranking, particularly for ranking in IR. Recently, a number of authors have proposed conducting direct optimization of multivariate performance measures in learning. For instance, Joachims [17] presents an SVM method to directly optimize nonlinear multivariate performance measures like the F1 measure for classification. Cossock & Zhang [5] find a way to approximately optimize the ranking performance measure DCG [15]. Metzler et al. [19] also propose a method of directly maximizing rank-based metrics for ranking on the basis of manifold learning. AdaRank is also one that tries to directly optimize multivariate performance measures, but is based on a different approach. AdaRank is unique in that it employs an exponential loss function based on IR performance measures and a boosting technique. 3. OUR METHOD: ADARANK 3.1 General Framework We first describe the general framework of learning to rank for document retrieval. In retrieval (testing), given a query the system returns a ranking list of documents in descending order of the relevance scores. The relevance scores are calculated with a ranking function (model). In learning (training), a number of queries and their corresponding retrieved documents are given. Furthermore, the relevance levels of the documents with respect to the queries are also provided. The relevance levels are represented as ranks (i.e., categories in a total order). The objective of learning is to construct a ranking function which achieves the best results in ranking of the training data in the sense of minimization of a loss function. Ideally the loss function is defined on the basis of the performance measure used in testing. Suppose that $Y = \{r_1, r_2, \ldots, r_\ell\}$ is a set of ranks, where $\ell$ denotes the number of ranks.
There exists a total order between the ranks: $r_\ell \succ r_{\ell-1} \succ \cdots \succ r_1$, where $\succ$ denotes a preference relationship. In training, a set of queries $Q = \{q_1, q_2, \ldots, q_m\}$ is given. Each query $q_i$ is associated with a list of retrieved documents $\mathbf{d}_i = \{d_{i1}, d_{i2}, \ldots, d_{i,n(q_i)}\}$ and a list of labels $\mathbf{y}_i = \{y_{i1}, y_{i2}, \ldots, y_{i,n(q_i)}\}$, where $n(q_i)$ denotes the sizes of lists $\mathbf{d}_i$ and $\mathbf{y}_i$, $d_{ij}$ denotes the $j$th document in $\mathbf{d}_i$, and $y_{ij} \in Y$ denotes the rank of document $d_{ij}$. A feature vector $\vec{x}_{ij} = \Psi(q_i, d_{ij}) \in X$ is created from each query-document pair $(q_i, d_{ij})$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n(q_i)$. Thus, the training set can be represented as $S = \{(q_i, \mathbf{d}_i, \mathbf{y}_i)\}_{i=1}^{m}$. The objective of learning is to create a ranking function $f: X \to \mathbb{R}$, such that for each query the elements in its corresponding document list can be assigned relevance scores using the function and then be ranked according to the scores. Specifically, we create a permutation of integers $\pi(q_i, \mathbf{d}_i, f)$ for query $q_i$, the corresponding list of documents $\mathbf{d}_i$, and the ranking function $f$. Let $\mathbf{d}_i = \{d_{i1}, d_{i2}, \ldots, d_{i,n(q_i)}\}$ be identified by the list of integers $\{1, 2, \ldots, n(q_i)\}$; then permutation $\pi(q_i, \mathbf{d}_i, f)$ is defined as a bijection from $\{1, 2, \ldots, n(q_i)\}$ to itself. We use $\pi(j)$ to denote the position of item $j$ (i.e., $d_{ij}$). The learning process turns out to be that of minimizing the loss function which represents the disagreement between the permutation $\pi(q_i, \mathbf{d}_i, f)$ and the list of ranks $\mathbf{y}_i$, for all of the queries. Table 1: Notations and explanations. In the paper, we define the ranking model as a linear combination of weak rankers, $f(\vec{x}) = \sum_{t=1}^{T} \alpha_t h_t(\vec{x})$, where $h_t(\vec{x})$ is a weak ranker, $\alpha_t$ is its weight, and $T$ is the number of weak rankers. In information retrieval, query-based performance measures are used to evaluate the ` goodness' of a ranking function. By query based measure, we mean a measure defined over a ranking list of documents with respect to a query. These measures include MAP, NDCG, MRR (Mean Reciprocal Rank), WTA (Winners Take All), and Precision@n [1, 15]. We utilize a general function $E(\pi(q_i, \mathbf{d}_i, f), \mathbf{y}_i) \in [-1, +1]$ to represent the performance measures. The first argument of $E$ is the permutation $\pi$ created using the ranking function $f$ on $\mathbf{d}_i$. The second argument is the list of ranks $\mathbf{y}_i$ given by humans. $E$ measures the agreement between $\pi$ and $\mathbf{y}_i$. Table 1 gives a summary of notations described above. Next, as examples of performance measures, we present the definitions of MAP and NDCG. Given a query $q_i$, the corresponding list of ranks $\mathbf{y}_i$, and a permutation $\pi_i$ on $\mathbf{d}_i$, average precision for $q_i$ is defined as $$AvgP_i = \frac{\sum_{j=1}^{n(q_i)} P_i(j)\, y_{ij}}{\sum_{j=1}^{n(q_i)} y_{ij}},$$ where $y_{ij}$ takes on 1 and 0 as values, representing being relevant or irrelevant, and $P_i(j)$ is defined as precision at the position of $d_{ij}$: $$P_i(j) = \frac{\sum_{k: \pi_i(k) \le \pi_i(j)} y_{ik}}{\pi_i(j)},$$ where $\pi_i(j)$ denotes the position of $d_{ij}$. Given a query $q_i$, the list of ranks $\mathbf{y}_i$, and a permutation $\pi_i$ on $\mathbf{d}_i$, NDCG at position $m$ for $q_i$ is defined as $$N_i = n_i \sum_{j: \pi_i(j) \le m} \frac{2^{y_{ij}} - 1}{\log(1 + \pi_i(j))},$$ where $y_{ij}$ takes on ranks as values and $n_i$ is a normalization constant. $n_i$ is chosen so that a perfect ranking $\pi_i^*$'s NDCG score at position $m$ is 1. 3.2 Algorithm Inspired by the AdaBoost algorithm for classification, we have devised a novel algorithm which can optimize a loss function based on the IR performance measures. The algorithm is referred to as ` AdaRank' and is shown in Figure 1. AdaRank takes a training set $S = \{(q_i, \mathbf{d}_i, \mathbf{y}_i)\}_{i=1}^{m}$ as input and takes the performance measure function $E$ and the number of iterations $T$ as parameters.
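As a concrete reference for the measure function $E$ that AdaRank takes as a parameter, the following is a minimal Python sketch of $AvgP$ and NDCG computed from a permutation and a label list; the function names and the use of a base-2 logarithm are our own illustrative choices, not prescribed by the paper:

```python
import math

def avg_precision(pi, y):
    """AvgP for one query. pi[j] is the 1-based position of document j
    under the ranking; y[j] is 1 (relevant) or 0 (irrelevant)."""
    n = len(y)
    # P_i(j): relevant documents ranked at or above d_ij, over its position
    def precision_at(j):
        return sum(y[k] for k in range(n) if pi[k] <= pi[j]) / pi[j]
    relevant = sum(y)
    return (sum(precision_at(j) * y[j] for j in range(n)) / relevant
            if relevant else 0.0)

def ndcg_at(pi, y, m):
    """NDCG at position m for one query. y[j] is a graded rank
    (larger means more relevant)."""
    dcg = sum((2 ** y[j] - 1) / math.log2(1 + pi[j])
              for j in range(len(y)) if pi[j] <= m)
    # n_i normalizes by the DCG of a perfect ordering of the same labels
    ideal = sorted(y, reverse=True)
    idcg = sum((2 ** g - 1) / math.log2(1 + pos)
               for pos, g in enumerate(ideal[:m], start=1))
    return dcg / idcg if idcg else 0.0

# Example: three documents at positions 2, 1, 3 with labels 1, 0, 1
print(avg_precision([2, 1, 3], [1, 0, 1]))   # 0.583...
print(ndcg_at([2, 1, 3], [1, 0, 1], 3))
```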
AdaRank runs $T$ rounds and at each round it creates a weak ranker $h_t$ ($t = 1, \ldots, T$). Finally, it outputs a ranking model $f$ by linearly combining the weak rankers. At each round, AdaRank maintains a distribution of weights over the queries in the training data. We denote the distribution of weights at round $t$ as $P_t$ and the weight on the $i$th training query $q_i$ at round $t$ as $P_t(i)$. (Figure 1: The AdaRank algorithm. At each round: create weak ranker $h_t$ with weighted distribution $P_t$ on training data $S$; choose $\alpha_t$; create $f_t$; update $P_{t+1}$.) Initially, AdaRank sets equal weights to the queries. At each round, it increases the weights of those queries that are not ranked well by $f_t$, the model created so far. As a result, the learning at the next round will be focused on the creation of a weak ranker that can work on the ranking of those ` hard' queries. At each round, a weak ranker $h_t$ is constructed based on training data with weight distribution $P_t$. The goodness of a weak ranker is measured by the performance measure $E$ weighted by $P_t$: $$\varphi(t) = \sum_{i=1}^{m} P_t(i)\, E(\pi(q_i, \mathbf{d}_i, h_t), \mathbf{y}_i).$$ Several methods for weak ranker construction can be considered. For example, a weak ranker can be created by using a subset of queries (together with their document list and label list) sampled according to the distribution $P_t$. In this paper, we use single features as weak rankers, as will be explained in Section 3.6. Once a weak ranker $h_t$ is built, AdaRank chooses a weight $\alpha_t > 0$ for the weak ranker. Intuitively, $\alpha_t$ measures the importance of $h_t$. A ranking model $f_t$ is created at each round by linearly combining the weak rankers constructed so far $h_1, \ldots, h_t$ with weights $\alpha_1, \ldots, \alpha_t$. $f_t$ is then used for updating the distribution $P_{t+1}$. 3.3 Theoretical Analysis The existing learning algorithms for ranking attempt to minimize a loss function based on instance pairs (document pairs). In contrast, AdaRank tries to optimize a loss function based on queries. Furthermore, the loss function in AdaRank is defined on the basis of general IR performance measures. The measures can be MAP, NDCG, WTA, MRR, or any other measures whose range is within $[-1, +1]$. We next explain why this is the case. Ideally we want to maximize the ranking accuracy in terms of a performance measure on the training data: $$\max_{f \in \mathcal{F}} \sum_{i=1}^{m} E(\pi(q_i, \mathbf{d}_i, f), \mathbf{y}_i), \quad (4)$$ where $\mathcal{F}$ is the set of possible ranking functions. This is equivalent to minimizing the loss on the training data $$\min_{f \in \mathcal{F}} \sum_{i=1}^{m} \left(1 - E(\pi(q_i, \mathbf{d}_i, f), \mathbf{y}_i)\right). \quad (5)$$ It is difficult to directly optimize the loss, because $E$ is a non-continuous function and thus may be difficult to handle. We instead attempt to minimize an upper bound of the loss in (5): $$\min_{f \in \mathcal{F}} \sum_{i=1}^{m} \exp\left\{-E(\pi(q_i, \mathbf{d}_i, f), \mathbf{y}_i)\right\}, \quad (6)$$ because $e^{-x} \ge 1 - x$ holds for any $x \in \mathbb{R}$. We consider the use of a linear combination of weak rankers as our ranking model: $$f(\vec{x}) = \sum_{t=1}^{T} \alpha_t h_t(\vec{x}), \quad (7)$$ where $\mathcal{H}$ is the set of possible weak rankers, $\alpha_t$ is a positive weight, and $(f_{t-1} + \alpha_t h_t)(\vec{x}) = f_{t-1}(\vec{x}) + \alpha_t h_t(\vec{x})$. Several ways of computing coefficients $\alpha_t$ and weak rankers $h_t$ may be considered. Following the idea of AdaBoost, in AdaRank we take the approach of ` forward stage-wise additive modeling' [12] and get the algorithm in Figure 1. It can be proved that there exists a lower bound on the ranking accuracy for AdaRank on training data, as presented in Theorem 1: $$\frac{1}{m} \sum_{i=1}^{m} E(\pi(q_i, \mathbf{d}_i, f_T), \mathbf{y}_i) \ge 1 - \prod_{t=1}^{T} e^{-\delta_{\min}^t} \sqrt{1 - \varphi(t)^2},$$ where $\varphi(t) = \sum_{i=1}^{m} P_t(i)\, E(\pi(q_i, \mathbf{d}_i, h_t), \mathbf{y}_i)$, $\delta_{\min}^t = \min_{i=1,\ldots,m} \delta_i^t$, and $\delta_i^t = E(\pi(q_i, \mathbf{d}_i, f_{t-1} + \alpha_t h_t), \mathbf{y}_i) - E(\pi(q_i, \mathbf{d}_i, f_{t-1}), \mathbf{y}_i) - \alpha_t E(\pi(q_i, \mathbf{d}_i, h_t), \mathbf{y}_i)$ (the decomposition used in the appendix). A proof of the theorem can be found in the appendix. The theorem implies that the ranking accuracy in terms of the performance measure can be continuously improved, as long as $e^{-\delta_{\min}^t}\sqrt{1 - \varphi(t)^2} < 1$ holds.
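Putting Sections 3.2 and 3.3 together, here is a minimal sketch of the training loop in Figure 1. Everything in it is illustrative rather than the authors' code: data is a list of (document-list, label-list) pairs, E is a query-level measure in [-1, +1] such as the AvgP/NDCG sketches above, and each candidate weak ranker maps a document list to a list of scores:

```python
import math

def rank_positions(scores):
    # pi[j] = 1-based position of document j when sorted by descending score
    order = sorted(range(len(scores)), key=lambda j: -scores[j])
    pi = [0] * len(scores)
    for pos, j in enumerate(order, start=1):
        pi[j] = pos
    return pi

def adarank(data, E, rankers, T):
    """data: list of (d_i, y_i); E(pi, y) in [-1, +1];
    rankers: candidate weak rankers h, each h(d) -> list of scores.
    Returns the ensemble as a list of (alpha_t, h_t) pairs."""
    m = len(data)
    P = [1.0 / m] * m                       # uniform initial query weights
    model = []

    def ensemble_scores(d):
        scores = [0.0] * len(d)
        for alpha, h in model:
            for j, s in enumerate(h(d)):
                scores[j] += alpha * s
        return scores

    for _ in range(T):
        # weak ranker with the best P_t-weighted performance (Section 3.6)
        def weighted_perf(h):
            return sum(P[i] * E(rank_positions(h(d)), y)
                       for i, (d, y) in enumerate(data))
        h_t = max(rankers, key=weighted_perf)
        # alpha_t = (1/2) ln((1 + phi(t)) / (1 - phi(t))), clamped for safety
        phi = max(min(weighted_perf(h_t), 1 - 1e-9), -1 + 1e-9)
        alpha_t = 0.5 * math.log((1 + phi) / (1 - phi))
        model.append((alpha_t, h_t))
        # re-weight queries by how badly the current model f_t ranks them
        raw = [math.exp(-E(rank_positions(ensemble_scores(d)), y))
               for d, y in data]
        Z = sum(raw)
        P = [w / Z for w in raw]
    return model
```

The weight update mirrors the normalized exponentials $P_{t+1}(i) \propto \exp\{-E(\pi(q_i, \mathbf{d}_i, f_t), \mathbf{y}_i)\}$ that appear in the proof of Theorem 1.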
3.4 Advantages AdaRank is a simple yet powerful method. More importantly, it is a method that can be justified from the theoretical viewpoint, as discussed above. In addition, AdaRank has several other advantages when compared with the existing learning to rank methods such as Ranking SVM, RankBoost, and RankNet. First, AdaRank can incorporate any performance measure, provided that the measure is query based and in the range of [-1, +1]. Notice that the major IR measures meet this requirement. In contrast the existing methods only minimize loss functions that are loosely related to the IR measures [16]. Second, the learning process of AdaRank is more efficient than those of the existing learning algorithms. The time complexity of AdaRank is of order O((k + T) · m · n log n), where k denotes the number of features, T the number of rounds, m the number of queries in training data, and n is the maximum number of documents for queries in training data. The time complexity of RankBoost, for example, is of order O(T · m · n²) [8]. Third, AdaRank employs a more reasonable framework for performing the ranking task than the existing methods. Specifically, in AdaRank the instances correspond to queries, while in the existing methods the instances correspond to document pairs. As a result, AdaRank does not have the following shortcomings that plague the existing methods. (a) The existing methods have to make a strong assumption that the document pairs from the same query are independently distributed. In reality, this is clearly not the case and this problem does not exist for AdaRank. (b) Ranking the most relevant documents on the tops of document lists is crucial for document retrieval. The existing methods cannot focus on the training on the tops, as indicated in [4]. Several methods for rectifying the problem have been proposed (e.g., [4]), however, they do not seem to fundamentally solve the problem. In contrast, AdaRank can naturally focus on training on the tops of document lists, because the performance measures used favor rankings for which relevant documents are on the tops. (c) In the existing methods, the numbers of document pairs vary from query to query, resulting in creating models biased toward queries with more document pairs, as pointed out in [4]. AdaRank does not have this drawback, because it treats queries rather than document pairs as basic units in learning. 3.5 Differences from AdaBoost AdaRank is a boosting algorithm. In that sense, it is similar to AdaBoost, but it also has several striking differences from AdaBoost. First, the types of instances are different. AdaRank makes use of queries and their corresponding document lists as instances. The labels in training data are lists of ranks (relevance levels). AdaBoost makes use of feature vectors as instances. The labels in training data are simply +1 and -1. Second, the performance measures are different. In AdaRank, the performance measure is a generic measure, defined on the document list and the rank list of a query. In AdaBoost the corresponding performance measure is a specific measure for binary classification, also referred to as ` margin' [25]. Third, the ways of updating weights are also different. In AdaBoost, the distribution of weights on training instances is calculated according to the current distribution and the performance of the current weak learner. In AdaRank, in contrast, it is calculated according to the performance of the ranking model created so far, as shown in Figure 1.
Note that AdaBoost can also adopt the weight updating method used in AdaRank. For AdaBoost they are equivalent (cf. [12], page 305). However, this is not true for AdaRank. 3.6 Construction of Weak Ranker We consider an efficient implementation for weak ranker construction, which is also used in our experiments. In the implementation, as weak ranker we choose the feature that has the optimal weighted performance among all of the features: $$\max_k \sum_{i=1}^{m} P_t(i)\, E(\pi(q_i, \mathbf{d}_i, x_k), \mathbf{y}_i).$$ Creating weak rankers in this way, the learning process turns out to be that of repeatedly selecting features and linearly combining the selected features. Note that features which are not selected in the training phase will have a weight of zero. 4. EXPERIMENTAL RESULTS We conducted experiments to test the performances of AdaRank using four benchmark datasets: OHSUMED, WSJ, AP, and .Gov. Table 2: Features used in the experiments on OHSUMED, WSJ, and AP datasets. C(w, d) represents the frequency of word w in document d; C represents the entire collection; n denotes the number of terms in the query; | · | denotes the size function; and idf(·) denotes inverse document frequency. Figure 2: Ranking accuracies on OHSUMED data. 4.1 Experiment Setting Ranking SVM [13, 16] and RankBoost [8] were selected as baselines in the experiments, because they are the state-of-the-art learning to rank methods. Furthermore, BM25 [24] was used as a baseline, representing the state-of-the-art IR method (we actually used the tool Lemur). For AdaRank, the parameter T was determined automatically during each experiment. Specifically, when there is no improvement in ranking accuracy in terms of the performance measure, the iteration stops (and T is determined). As the measure E, MAP and NDCG@5 were utilized. The results for AdaRank using MAP and NDCG@5 as measures in training are represented as AdaRank.MAP and AdaRank.NDCG, respectively. 4.2 Experiment with OHSUMED Data In this experiment, we made use of the OHSUMED dataset [14] to test the performances of AdaRank. The OHSUMED dataset consists of 348,566 documents and 106 queries. There are in total 16,140 query-document pairs upon which relevance judgments are made. The relevance judgments are either ` d' (definitely relevant), ` p' (possibly relevant), or ` n' (not relevant). The data have been used in many experiments in IR, for example [4, 29]. As features, we adopted those used in document retrieval [4]. Table 2 shows the features. For example, tf (term frequency), idf (inverse document frequency), dl (document length), and combinations of them are defined as features. BM25 score itself is also a feature. Stop words were removed and stemming was conducted in the data. We randomly divided queries into four even subsets and conducted 4-fold cross-validation experiments. We tuned the parameters for BM25 during one of the trials and applied them to the other trials. The results reported in Figure 2 are those averaged over four trials. In MAP calculation, we define the rank ` d' as relevant and the other two ranks as irrelevant. Table 3: Statistics on WSJ and AP datasets. Figure 3: Ranking accuracies on WSJ dataset. From Figure 2, we see that both AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures. We conducted significance tests (t-test) on the improvements of AdaRank.MAP over BM25, Ranking SVM, and RankBoost in terms of MAP. The results indicate that all the improvements are statistically significant (p-value < 0.05).
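The significance tests reported here can be reproduced with a paired t-test on per-query scores, assuming per-query average precision values are available for both systems; scipy is our choice of tooling, not necessarily the authors':

```python
from scipy import stats

# Paired t-test over the same queries: each entry is, e.g., the average
# precision of one query under each system (toy numbers for illustration).
ap_adarank  = [0.42, 0.55, 0.31, 0.62, 0.48, 0.39]
ap_baseline = [0.35, 0.51, 0.30, 0.49, 0.41, 0.37]

t_stat, p_value = stats.ttest_rel(ap_adarank, ap_baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# p < 0.05 is the significance criterion used in the paper.
```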
We also conducted a t-test on the improvements of AdaRank.NDCG over BM25, Ranking SVM, and RankBoost in terms of NDCG@5. The improvements are also statistically significant. 4.3 Experiment with WSJ and AP Data In this experiment, we made use of the WSJ and AP datasets from the TREC ad-hoc retrieval track, to test the performances of AdaRank. WSJ contains 74,520 articles of the Wall Street Journal from 1990 to 1992, and AP contains 158,240 articles of the Associated Press in 1988 and 1990. 200 queries are selected from the TREC topics (No. 101 ∼ No. 300). Each query has a number of documents associated and they are labeled as ` relevant' or ` irrelevant' (to the query). Following the practice in [28], the queries that have less than 10 relevant documents were discarded. Table 3 shows the statistics on the two datasets. In the same way as in Section 4.2, we adopted the features listed in Table 2 for ranking. We also conducted 4-fold cross-validation experiments. The results reported in Figures 3 and 4 are those averaged over four trials on the WSJ and AP datasets, respectively. From Figures 3 and 4, we can see that AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures on both WSJ and AP. We conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost on WSJ and AP. The results indicate that all the improvements in terms of MAP are statistically significant (p-value < 0.05). However, only some of the improvements in terms of NDCG@5 are statistically significant, although overall the improvements on NDCG scores are quite high (1-2 points). Figure 4: Ranking accuracies on AP dataset. 4.4 Experiment with .Gov Data In this experiment, we further made use of the TREC .Gov data to test the performance of AdaRank for the task of web retrieval. The corpus is a crawl from the .gov domain in early 2002, and has been used at the TREC Web Track since 2002. There are a total of 1,053,110 web pages with 11,164,829 hyperlinks in the data. The 50 queries in the topic distillation task in the Web Track of TREC 2003 [6] were used. The ground truths for the queries are provided by the TREC committee with binary judgment: relevant or irrelevant. The number of relevant pages varies from query to query (from 1 to 86). We extracted 14 features from each query-document pair. Table 4 gives a list of the features. They are the outputs of some well-known algorithms (systems). These features are different from those in Table 2, because the task is different. Table 4: Features used in the experiments on .Gov dataset. Figure 5: Ranking accuracies on .Gov dataset. Again, we conducted 4-fold cross-validation experiments. The results averaged over four trials are reported in Figure 5. From the results, we can see that AdaRank.MAP and AdaRank.NDCG outperform all the baselines in terms of all measures. We conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost. Some of the improvements are not statistically significant. This is because we have only 50 queries used in the experiments, and the number of queries is too small. 4.5 Discussions We investigated the reasons that AdaRank outperforms the baseline methods, using the results of the OHSUMED dataset as examples. First, we examined the reason that AdaRank has higher performances than Ranking SVM and RankBoost. Specifically, we compared the error rates between different rank pairs made by Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG on the test data. The results averaged over four trials in the 4-fold cross validation are shown in Figure 6. Figure 6: Accuracy on ranking document pairs with OHSUMED dataset. Figure 7: Distribution of queries with different number of document pairs in training data of trial 1. We use ` d-n' to stand for the pairs between ` definitely relevant' and ` not relevant', ` d-p' the pairs between ` definitely relevant' and ` partially relevant', and ` p-n' the pairs between ` partially relevant' and ` not relevant'. From Figure 6, we can see that AdaRank.MAP and AdaRank.NDCG make fewer errors for ` d-n' and ` d-p', which are related to the tops of rankings and are important. This is because AdaRank.MAP and AdaRank.NDCG can naturally focus upon the training on the tops by optimizing MAP and NDCG@5, respectively. We also made statistics on the number of document pairs per query in the training data (for trial 1). The queries are clustered into different groups based on the number of their associated document pairs. Figure 7 shows the distribution of the query groups. In the figure, for example, ` 0-1k' is the group of queries whose numbers of document pairs are between 0 and 999. We can see that the numbers of document pairs really vary from query to query. Next we evaluated the accuracies of AdaRank.MAP and RankBoost in terms of MAP for each of the query groups. The results are reported in Figure 8. We found that the average MAP of AdaRank.MAP over the groups is two points higher than RankBoost. Furthermore, it is interesting to see that AdaRank.MAP performs particularly better than RankBoost for queries with small numbers of document pairs (e.g., ` 0-1k', ` 1k-2k', and ` 2k-3k'). The results indicate that AdaRank.MAP can effectively avoid creating a model biased towards queries with more document pairs. For AdaRank.NDCG, similar results can be observed. Figure 8: Differences in MAP for different query groups. Figure 9: MAP on training set when model is trained with MAP or NDCG@5. We further conducted an experiment to see whether AdaRank has the ability to improve the ranking accuracy in terms of a measure by using the measure in training. Specifically, we trained ranking models using AdaRank.MAP and AdaRank.NDCG and evaluated their accuracies on the training dataset in terms of both MAP and NDCG@5. The experiment was conducted for each trial. Figure 9 and Figure 10 show the results in terms of MAP and NDCG@5, respectively. We can see that AdaRank.MAP trained with MAP performs better in terms of MAP, while AdaRank.NDCG trained with NDCG@5 performs better in terms of NDCG@5. The results indicate that AdaRank can indeed enhance ranking performance in terms of a measure by using the measure in training. Finally, we tried to verify the correctness of Theorem 1. That is, the ranking accuracy in terms of the performance measure can be continuously improved, as long as $e^{-\delta_{\min}^t}\sqrt{1 - \varphi(t)^2} < 1$ holds. As an example, Figure 11 shows the learning curve of AdaRank.MAP in terms of MAP during the training phase in one trial of the cross validation. From the figure, we can see that the ranking accuracy of AdaRank.MAP steadily improves as the training goes on, until it reaches the peak. The result agrees well with Theorem 1. 5. CONCLUSION AND FUTURE WORK In this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank.
In contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures. It employs a boosting technique in ranking model learning. AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost. Figure 10: NDCG@5 on training set when model is trained with MAP or NDCG@5. Figure 11: Learning curve of AdaRank. Future work includes theoretical analysis on the generalization error and other properties of the AdaRank algorithm, and further empirical evaluations of the algorithm including comparisons with other algorithms that can directly optimize performance measures.
AdaRank: A Boosting Algorithm for Information Retrieval ABSTRACT In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs ` weak rankers' on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost. 1. INTRODUCTION Recently ` learning to rank' has gained increasing attention in both the fields of information retrieval and machine learning. When applied to document retrieval, learning to rank becomes a task as follows. In training, a ranking model is constructed with data consisting of queries, their corresponding retrieved documents, and relevance levels given by humans. In ranking, given a new query, the corresponding retrieved documents are sorted by using the trained ranking model. In document retrieval, usually ranking results are evaluated in terms of performance measures such as MAP (Mean Average Precision) [1] and NDCG (Normalized Discounted Cumulative Gain) [15]. Ideally, the ranking function is created so that the accuracy of ranking in terms of one of the measures with respect to the training data is maximized. Several methods for learning to rank have been developed and applied to document retrieval. For example, Herbrich et al. [13] propose a learning algorithm for ranking on the basis of Support Vector Machines, called Ranking SVM. Freund et al. [8] take a similar approach and perform the learning by using boosting, referred to as RankBoost. All the existing methods used for document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss functions loosely related to the IR performance measures, not loss functions directly based on the measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. In this paper, we aim to develop a new learning algorithm that can directly optimize any performance measure used in document retrieval. Inspired by the work of AdaBoost for classification [9], we propose to develop a boosting algorithm for information retrieval, referred to as AdaRank. AdaRank utilizes a linear combination of ` weak rankers' as its model. In learning, it repeats the process of re-weighting the training sample, creating a weak ranker, and calculating a weight for the ranker. 
We show that the AdaRank algorithm can iteratively optimize an exponential loss function based on any of the IR performance measures. A lower bound of the performance on training data is given, which indicates that the ranking accuracy in terms of the performance measure can be continuously improved during the training process. AdaRank offers several advantages: ease in implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results indicate that AdaRank can outperform the baseline methods of BM25, Ranking SVM, and RankBoost, on four benchmark datasets including OHSUMED, WSJ, AP, and .Gov. Tuning ranking models using certain training data and a performance measure is a common practice in IR [1]. As the number of features in the ranking model gets larger and the amount of training data gets larger, the tuning becomes harder. From the viewpoint of IR, AdaRank can be viewed as a machine learning method for ranking model tuning. Recently, direct optimization of performance measures in learning has become a hot research topic. Several methods for classification [17] and ranking [5, 19] have been proposed. AdaRank can be viewed as a machine learning method for direct optimization of performance measures, based on a different approach. The rest of the paper is organized as follows. After a summary of related work in Section 2, we describe the proposed AdaRank algorithm in detail in Section 3. Experimental results and discussions are given in Section 4. Section 5 concludes this paper and gives future work. 2. RELATED WORK 2.1 Information Retrieval The key problem for document retrieval is ranking, specifically, how to create the ranking model (function) that can sort documents based on their relevance to the given query. It is a common practice in IR to tune the parameters of a ranking model using some labeled data and one performance measure [1]. For example, the state-of-the-art methods of BM25 [24] and LMIR (Language Models for Information Retrieval) [18, 22] all have parameters to tune. As the ranking models become more sophisticated (more features are used) and more labeled data become available, how to tune or train ranking models turns out to be a challenging issue. Recently, methods of ` learning to rank' have been applied to ranking model construction and some promising results have been obtained. For example, Joachims [16] applies Ranking SVM to document retrieval. He utilizes click-through data to deduce training data for the model creation. Cao et al. [4] adapt Ranking SVM to document retrieval by modifying the Hinge Loss function to better meet the requirements of IR. Specifically, they introduce a Hinge Loss function that heavily penalizes errors on the tops of ranking lists and errors from queries with fewer retrieved documents. Burges et al. [3] employ Relative Entropy as a loss function and Gradient Descent as an algorithm to train a Neural Network model for ranking in document retrieval. The method is referred to as ` RankNet'. 2.2 Machine Learning There are three topics in machine learning which are related to our current work. They are ` learning to rank', boosting, and direct optimization of performance measures. Learning to rank is to automatically create a ranking function that assigns scores to instances and then rank the instances by using the scores. Several approaches have been proposed to tackle the problem. One major approach to learning to rank is that of transforming it into binary classification on instance pairs.
This ` pair-wise' approach fits well with information retrieval and thus is widely used in IR. Typical methods of the approach include Ranking SVM [13], RankBoost [8], and RankNet [3]. For other approaches to learning to rank, refer to [2, 11, 31]. In the pair-wise approach to ranking, the learning task is formalized as a problem of classifying instance pairs into two categories (correctly ranked and incorrectly ranked). Actually, it is known that reducing classification errors on instance pairs is equivalent to maximizing a lower bound of MAP [16]. In that sense, the existing methods of Ranking SVM, RankBoost, and RankNet are only able to minimize loss functions that are loosely related to the IR performance measures. Boosting is a general technique for improving the accuracies of machine learning algorithms. The basic idea of boosting is to repeatedly construct ` weak learners' by re-weighting training data and form an ensemble of weak learners such that the total performance of the ensemble is ` boosted'. Freund and Schapire have proposed the first well-known boosting algorithm called AdaBoost (Adaptive Boosting) [9], which is designed for binary classification (0-1 prediction). Later, Schapire & Singer have introduced a generalized version of AdaBoost in which weak learners can give confidence scores in their predictions rather than make 0-1 decisions [26]. Extensions have been made to deal with the problems of multi-class classification [10, 26], regression [7], and ranking [8]. In fact, AdaBoost is an algorithm that ingeniously constructs a linear model by minimizing the ` exponential loss function' with respect to the training data [26]. Our work in this paper can be viewed as a boosting method developed for ranking, particularly for ranking in IR. Recently, a number of authors have proposed conducting direct optimization of multivariate performance measures in learning. For instance, Joachims [17] presents an SVM method to directly optimize nonlinear multivariate performance measures like the F1 measure for classification. Cossock & Zhang [5] find a way to approximately optimize the ranking performance measure DCG [15]. Metzler et al. [19] also propose a method of directly maximizing rank-based metrics for ranking on the basis of manifold learning. AdaRank is also one that tries to directly optimize multivariate performance measures, but is based on a different approach. AdaRank is unique in that it employs an exponential loss function based on IR performance measures and a boosting technique. 3. OUR METHOD: ADARANK 3.1 General Framework 3.2 Algorithm 3.3 Theoretical Analysis 3.4 Advantages 3.5 Differences from AdaBoost 3.6 Construction of Weak Ranker 4. EXPERIMENTAL RESULTS 4.1 Experiment Setting 4.2 Experiment with OHSUMED Data 4.3 Experiment with WSJ and AP Data 4.4 Experiment with .Gov Data 4.5 Discussions 5. CONCLUSION AND FUTURE WORK In this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank. In contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures. It employs a boosting technique in ranking model learning. AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost.
Figure 10: NDCG@5 on training set when model is trained with MAP or NDCG@5. Figure 11: Learning curve of AdaRank. Future work includes theoretical analysis on the generalization error and other properties of the AdaRank algorithm, and further empirical evaluations of the algorithm including comparisons with other algorithms that can directly optimize performance measures.
AdaRank: A Boosting Algorithm for Information Retrieval ABSTRACT In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs ` weak rankers' on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost. 1. INTRODUCTION Recently ` learning to rank' has gained increasing attention in both the fields of information retrieval and machine learning. When applied to document retrieval, learning to rank becomes a task as follows. In training, a ranking model is constructed with data consisting of queries, their corresponding retrieved documents, and relevance levels given by humans. In ranking, given a new query, the corresponding retrieved documents are sorted by using the trained ranking model. Ideally, the ranking function is created so that the accuracy of ranking in terms of one of the measures with respect to the training data is maximized. Several methods for learning to rank have been developed and applied to document retrieval. For example, Herbrich et al. [13] propose a learning algorithm for ranking on the basis of Support Vector Machines, called Ranking SVM. Freund et al. [8] take a similar approach and perform the learning by using boosting, referred to as RankBoost. All the existing methods used for document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss functions loosely related to the IR performance measures, not loss functions directly based on the measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. In this paper, we aim to develop a new learning algorithm that can directly optimize any performance measure used in document retrieval. Inspired by the work of AdaBoost for classification [9], we propose to develop a boosting algorithm for information retrieval, referred to as AdaRank. AdaRank utilizes a linear combination of ` weak rankers' as its model. We show that AdaRank algorithm can iteratively optimize an exponential loss function based on any of IR performance measures. A lower bound of the performance on training data is given, which indicates that the ranking accuracy in terms of the performance measure can be continuously improved during the training process. 
AdaRank offers several advantages: ease in implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results indicate that AdaRank can outperform the baseline methods of BM25, Ranking SVM, and RankBoost, on four benchmark datasets including OHSUMED, WSJ, AP, and .Gov. Tuning ranking models using certain training data and a performance measure is a common practice in IR [1]. As the number of features in the ranking model gets larger and the amount of training data gets larger, the tuning becomes harder. From the viewpoint of IR, AdaRank can be viewed as a machine learning method for ranking model tuning. Recently, direct optimization of performance measures in learning has become a hot research topic. Several methods for classification [17] and ranking [5, 19] have been proposed. AdaRank can be viewed as a machine learning method for direct optimization of performance measures, based on a different approach. After a summary of related work in Section 2, we describe the proposed AdaRank algorithm in detail in Section 3. Experimental results and discussions are given in Section 4. Section 5 concludes this paper and gives future work. 2. RELATED WORK 2.1 Information Retrieval The key problem for document retrieval is ranking, specifically, how to create the ranking model (function) that can sort documents based on their relevance to the given query. It is a common practice in IR to tune the parameters of a ranking model using some labeled data and one performance measure [1]. Recently, methods of ` learning to rank' have been applied to ranking model construction and some promising results have been obtained. For example, Joachims [16] applies Ranking SVM to document retrieval. He utilizes click-through data to deduce training data for the model creation. Cao et al. [4] adapt Ranking SVM to document retrieval by modifying the Hinge Loss function to better meet the requirements of IR. Specifically, they introduce a Hinge Loss function that heavily penalizes errors on the tops of ranking lists and errors from queries with fewer retrieved documents. Burges et al. [3] employ Relative Entropy as a loss function and Gradient Descent as an algorithm to train a Neural Network model for ranking in document retrieval. The method is referred to as ` RankNet'. 2.2 Machine Learning There are three topics in machine learning which are related to our current work. They are ` learning to rank', boosting, and direct optimization of performance measures. Learning to rank is to automatically create a ranking function that assigns scores to instances and then rank the instances by using the scores. Several approaches have been proposed to tackle the problem. One major approach to learning to rank is that of transforming it into binary classification on instance pairs. This ` pair-wise' approach fits well with information retrieval and thus is widely used in IR. Typical methods of the approach include Ranking SVM [13], RankBoost [8], and RankNet [3]. For other approaches to learning to rank, refer to [2, 11, 31]. In the pair-wise approach to ranking, the learning task is formalized as a problem of classifying instance pairs into two categories (correctly ranked and incorrectly ranked). In that sense, the existing methods of Ranking SVM, RankBoost, and RankNet are only able to minimize loss functions that are loosely related to the IR performance measures. Boosting is a general technique for improving the accuracies of machine learning algorithms.
Our work in this paper can be viewed as a boosting method developed for ranking, particularly for ranking in IR. Recently, a number of authors have proposed conducting direct optimization of multivariate performance measures in learning. For instance, Joachims [17] presents an SVM method to directly optimize nonlinear multivariate performance measures like the F1 measure for classification. Cossock & Zhang [5] find a way to approximately optimize the ranking performance measure DCG [15]. Metzler et al. [19] also propose a method of directly maximizing rank-based metrics for ranking on the basis of manifold learning. AdaRank is also one that tries to directly optimize multivariate performance measures, but is based on a different approach. AdaRank is unique in that it employs an exponential loss function based on IR performance measures and a boosting technique. 5. CONCLUSION AND FUTURE WORK In this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank. In contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures. It employs a boosting technique in ranking model learning. AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost. Figure 10: NDCG@5 on training set when model is trained with MAP or NDCG@5. Figure 11: Learning curve of AdaRank.
I-61
Distributed Agent-Based Air Traffic Flow Management
Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET -- an air traffic flow simulator developed at NASA and used extensively by the FAA and industry -- to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going through that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. Our FACET based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation).
[ "traffic flow", "air traffic control", "reinforc learn", "reinforc learn", "congest", "multiag system", "optim", "futur atm concept evalu tool", "new method of estim agent reward", "deploy strategi" ]
[ "P", "P", "P", "P", "P", "M", "U", "U", "M", "U" ]
Distributed Agent-Based Air Traffic Flow Management Kagan Tumer Oregon State University 204 Rogers Hall Corvallis, OR 97331, USA kagan.tumer@oregonstate.edu Adrian Agogino UCSC, NASA Ames Research Center Mailstop 269-3 Moffett Field, CA 94035, USA adrian@email.arc.nasa.gov ABSTRACT Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET - an air traffic flow simulator developed at NASA and used extensively by the FAA and industry - to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going through that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. Our FACET based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation). Categories and Subject Descriptors I.2.11 [Computing Methodologies]: Artificial Intelligence-Multiagent systems General Terms Algorithms, Performance 1. INTRODUCTION The efficient, safe and reliable management of our ever increasing air traffic is one of the fundamental challenges facing the aerospace industry today. On a typical day, more than 40,000 commercial flights operate within the US airspace [14]. In order to efficiently and safely route this air traffic, current traffic flow control relies on a centralized, hierarchical routing strategy that performs flow projections ranging from one to six hours. As a consequence, the system is slow to respond to developing weather or airport conditions leading potentially minor local delays to cascade into large regional congestions. In 2005, weather, routing decisions and airport conditions caused 437,667 delays, accounting for 322,272 hours of delays. The total cost of these delays was estimated to exceed three billion dollars by industry [7]. Furthermore, as the traffic flow increases, the current procedures increase the load on the system, the airports, and the air traffic controllers (more aircraft per region) without providing any of them with means to shape the traffic patterns beyond minor reroutes. The Next Generation Air Transportation Systems (NGATS) initiative aims to address these issues and account not only for a threefold increase in traffic, but also for the increasing heterogeneity of aircraft and decreasing restrictions on flight paths.
Unlike many other flow problems where the increasing traffic is to some extent absorbed by improved hardware (e.g., more servers with larger memories and faster CPUs for internet routing) the air traffic domain needs to find mainly algorithmic solutions, as the infrastructure (e.g., number of the airports) will not change significantly to impact the flow problem. There is therefore a strong need to explore new, distributed and adaptive solutions to the air flow control problem. An adaptive, multi-agent approach is an ideal fit to this naturally distributed problem where the complex interaction among the aircraft, airports and traffic controllers renders a pre-determined centralized solution severely suboptimal at the first deviation from the expected plan. Though a truly distributed and adaptive solution (e.g., free flight where aircraft can choose almost any path) offers the most potential in terms of optimizing flow, it also provides the most radical departure from the current system. As a consequence, a shift to such a system presents tremendous difficulties both in terms of implementation (e.g., scheduling and airport capacity) and political fallout (e.g., impact on air traffic controllers). In this paper, we focus on an agent-based system that can be implemented readily. In this approach, we assign an agent to a fix, a specific location in 2D. Because aircraft flight plans consist of a sequence of fixes, this representation allows localized fixes (or agents) to have direct impact on the flow of air traffic. (We discuss how flight plans with few fixes can be handled in more detail in Section 2.) In this approach, the agents' actions are to set the separation that approaching aircraft are required to keep. This simple agent-action pair allows the agents to slow down or speed up local traffic and allows agents to have a significant impact on the overall air traffic flow. Agents learn the most appropriate separation for their location using a reinforcement learning (RL) algorithm [15]. In a reinforcement learning approach, the selection of the agent reward has a large impact on the performance of the system. In this work, we explore four different agent reward functions, and compare them to simulating various changes to the system and selecting the best solution (e.g., equivalent to a Monte-Carlo search). The first explored reward consisted of the system reward. The second reward was a personalized agent reward based on collectives [3, 17, 18]. The last two rewards were personalized rewards based on estimations to lower the computational burden of the reward computation. All three personalized rewards aim to align agent rewards with the system reward and ensure that the rewards remain sensitive to the agents' actions. Previous work in this domain fell into one of two distinct categories: the first principles based modeling approaches used by domain experts [5, 8, 10, 13] and the algorithmic approaches explored by the learning and/or agents community [6, 9, 12]. Though our approach comes from the second category, we aim to bridge the gap by testing our algorithms using FACET, a simulator introduced by work in the first category and widely used in the field (over 40 organizations and 5000 users) [4, 11]. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented and test that algorithm using FACET. In Section 2, we describe the air traffic flow problem and the simulation tool, FACET.
In Section 3, we present the agent-based approach, focusing on the selection of the agents and their action space along with the agents' learning algorithms and reward structures. In Section 4 we present results in domains with one and two congestions, explore different trade-offs of the system objective function, discuss the scaling properties of the different agent rewards and discuss the computational cost of achieving certain levels of performance. Finally, in Section 5, we discuss the implications of these results and map out the work required to enable the FAA to reach its stated goal of increasing the traffic volume threefold.

2. AIR TRAFFIC FLOW MANAGEMENT
With over 40,000 flights operating within the United States airspace on an average day, the management of traffic flow is a complex and demanding problem. Not only are there concerns for the efficiency of the system, but also for fairness (e.g., different airlines), adaptability (e.g., developing weather patterns), reliability and safety (e.g., airport management). In order to address such issues, the management of this traffic flow occurs over four hierarchical levels:
1. Separation assurance (2-30 minute decisions);
2. Regional flow (20 minutes to 2 hours);
3. National flow (1-8 hours); and
4. Dynamic airspace configuration (6 hours to 1 year).
Because of the strict guidelines and safety concerns surrounding aircraft separation, we will not address that control level in this paper. Similarly, because of the business and political impact of dynamic airspace configuration, we will not address the outermost flow control level either. Instead, we will focus on the regional and national flow management problems, restricting our impact to decisions with time horizons between twenty minutes and eight hours. The proposed algorithm will fit between long term planning by the FAA and the very short term decisions by air traffic controllers. The continental US airspace consists of 20 regional centers (handling 200-300 flights on a given day) and 830 sectors (handling 10-40 flights). The flow control problem has to address the integration of policies across these sectors and centers, account for the complexity of the system (e.g., over 5200 public use airports and 16,000 air traffic controllers) and handle changes to the policies caused by weather patterns. Two of the fundamental problems in addressing the flow problem are: (i) modeling and simulating such a large complex system, as the fidelity required to provide reliable results is difficult to achieve; and (ii) establishing the method by which the flow management is evaluated, as directly minimizing the total delay may lead to inequities towards particular regions or commercial entities. Below, we discuss how we addressed both issues: we present FACET, a widely used simulation tool, and we discuss our system evaluation function.

Figure 1: FACET screenshot displaying traffic routes and air flow statistics.

2.1 FACET
FACET (Future ATM Concepts Evaluation Tool), a physics-based model of the US airspace, was developed to accurately model the complex air traffic flow problem [4]. It is based on propagating the trajectories of proposed flights forward in time.
FACET can be used to either simulate and display air traffic (a 24 hour slice with 60,000 flights takes 15 minutes to simulate on a 3 GHz, 1 GB RAM computer) or provide rapid statistics on recorded data (4D trajectories for 10,000 flights including sectors, airports, and fix statistics in 10 seconds on the same computer) [11]. FACET is extensively used by the FAA, NASA and industry (over 40 organizations and 5000 users) [11]. FACET simulates air traffic based on flight plans and, through a graphical user interface, allows the user to analyze congestion patterns of different sectors and centers (Figure 1). FACET also allows the user to change the flow patterns of the aircraft through a number of mechanisms, including metering aircraft through fixes. The user can then observe the effects of these changes on congestion. In this paper, agents use FACET directly through batch mode, where agents send scripts to FACET asking it to simulate air traffic based on metering orders imposed by the agents. The agents then compute their rewards based on feedback received from FACET about the impact of these meterings.

2.2 System Evaluation
The system performance evaluation function we select focuses on delay and congestion but does not account for fairness impact on different commercial entities. Instead it focuses on the amount of congestion in a particular sector and on the amount of measured air traffic delay. The linear combination of these two terms gives the full system evaluation function, G(z), as a function of the full system state z. More precisely, we have:

G(z) = −((1 − α) B(z) + α C(z)) ,   (1)

where B(z) is the total delay penalty for all aircraft in the system, and C(z) is the total congestion penalty. The relative importance of these two penalties is determined by the value of α, and we explore various trade-offs based on α in Section 4. The total delay, B, is a sum of delays over a set of sectors S and is given by:

B(z) = Σ_{s∈S} B_s(z) ,   (2)

where

B_s(z) = Σ_t Θ(t − τ_s) k_{s,t} (t − τ_s) ,   (3)

where k_{s,t} is the number of aircraft in sector s at time t, τ_s is a predetermined time, and Θ(·) is the step function that equals 1 when its argument is greater than or equal to zero, and has a value of zero otherwise. Intuitively, B_s(z) provides the total number of aircraft that remain in a sector s past a predetermined time τ_s, and scales their contribution to the count by the amount by which they are late. In this manner B_s(z) provides a delay factor that not only accounts for all aircraft that are late, but also provides a scale to measure their lateness. This definition is based on the assumption that most aircraft should have reached the sector by time τ_s and that aircraft arriving after this time are late. In this paper the value of τ_s is determined by assessing aircraft counts in the sector in the absence of any intervention or any deviation from predicted paths. Similarly, the total congestion penalty is a sum over the congestion penalties over the sectors of observation, S:

C(z) = Σ_{s∈S} C_s(z) ,   (4)

where

C_s(z) = a Σ_t Θ(k_{s,t} − c_s) e^{b(k_{s,t} − c_s)} ,   (5)

where a and b are normalizing constants, and c_s is the capacity of sector s as defined by the FAA. Intuitively, C_s(z) penalizes a system state where the number of aircraft in a sector exceeds the FAA's official sector capacity. Each sector capacity is computed using various metrics, which include the number of air traffic controllers available. The exponential penalty is intended to provide strong feedback to return the number of aircraft in a sector to below the FAA-mandated capacities.
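To make Equations 1-5 concrete, the following is a minimal Python sketch of this evaluation function. The data layout (a per-sector list of aircraft counts indexed by time step) and all names are our own illustration, not part of FACET or the authors' code; the defaults a = 50 and b = 0.3 match the values used in Section 4.

```python
import math

def delay_penalty(counts, tau):
    # B_s(z): aircraft still in the sector at or after time tau, weighted by
    # their lateness (Equation 3); counts[t] is k_{s,t} at time step t.
    return sum(k * (t - tau) for t, k in enumerate(counts) if t >= tau)

def congestion_penalty(counts, capacity, a=50.0, b=0.3):
    # C_s(z): exponential penalty whenever the count reaches capacity c_s
    # (Equation 5); Theta(0) = 1, so a count equal to capacity contributes a.
    return a * sum(math.exp(b * (k - capacity)) for k in counts if k >= capacity)

def system_evaluation(sector_counts, taus, capacities, alpha=0.5):
    # G(z) = -((1 - alpha) B(z) + alpha C(z)), summing Equations 2 and 4
    # over all observed sectors.
    B = sum(delay_penalty(sector_counts[s], taus[s]) for s in sector_counts)
    C = sum(congestion_penalty(sector_counts[s], capacities[s]) for s in sector_counts)
    return -((1.0 - alpha) * B + alpha * C)
```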
3. AGENT BASED AIR TRAFFIC FLOW
The multi-agent approach to air traffic flow management we present is predicated on adaptive agents taking independent actions that maximize the system evaluation function discussed above. To that end, there are four critical decisions that need to be made: agent selection, agent action set selection, agent learning algorithm selection and agent reward structure selection.

3.1 Agent Selection
Selecting the aircraft as agents is perhaps the most obvious choice for defining an agent. That selection has the advantage that agent actions can be intuitive (e.g., change of flight plan, increase or decrease speed and altitude) and offer a high level of granularity, in that each agent can have its own policy. However, there are several problems with that approach. First, there are in excess of 40,000 aircraft in a given day, leading to a massively large multi-agent system. Second, as the agents would not be able to sample their state space sufficiently, learning would be prohibitively slow. As an alternative, we assign agents to individual ground locations throughout the airspace called fixes. Each agent is then responsible for any aircraft going through its fix. Fixes offer many advantages as agents:
1. Their number can vary depending on need. The system can have as many agents as required for a given situation (e.g., agents coming live around an area with developing weather conditions).
2. Because fixes are stationary, collecting data and matching behavior to reward is easier.
3. Because aircraft flight plans consist of fixes, agents will have the ability to affect traffic flow patterns.
4. They can be deployed within the current air traffic routing procedures, and can be used as tools to help air traffic controllers rather than compete with or replace them.
Figure 2 shows a schematic of this agent-based system. Agents surrounding a congestion or weather condition affect the flow of traffic to reduce the burden on particular regions.

3.2 Agent Actions
The second issue that needs to be addressed is determining the action set of the agents. Again, an obvious choice may be for fixes to bid on aircraft, affecting their flight plans. Though appealing from a free flight perspective, that approach makes the flight plans too unreliable and significantly complicates the scheduling problem (e.g., arrival at airports and the subsequent gate assignment process). Instead, we set the actions of an agent to determining the separation (distance between aircraft) that aircraft have to maintain when going through the agent's fix. This is known as setting the Miles in Trail or MIT. When an agent sets the MIT value to d, aircraft going towards its fix are instructed to line up and keep d miles of separation (though aircraft will always keep a safe distance from each other regardless of the value of d). When there are many aircraft going through a fix, the effect of issuing higher MIT values is to slow down the rate of aircraft that go through the fix. By increasing the value of d, an agent can limit the amount of air traffic downstream of its fix, reducing congestion at the expense of increasing the delays upstream; a back-of-the-envelope calculation after the figure caption illustrates this throttling effect.

Figure 2: Schematic of agent architecture. The agents corresponding to fixes surrounding a possible congestion become live and start setting new separation times.
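The sketch below works out that throttling effect numerically. The relation (throughput through a fix is bounded by ground speed divided by in-trail separation) and the example numbers are our own illustration, not values from the paper.

```python
def max_fix_throughput(ground_speed_mph, mit_miles):
    # Upper bound on aircraft per hour through a fix: with d miles in trail
    # at speed v, one aircraft crosses every d/v hours, i.e., v/d per hour.
    return ground_speed_mph / mit_miles

# Raising MIT from 10 to 30 miles at a 480 mph ground speed cuts the
# ceiling from 48 to 16 aircraft per hour: less congestion downstream,
# more delay upstream.
print(max_fix_throughput(480, 10))  # 48.0
print(max_fix_throughput(480, 30))  # 16.0
```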
3.3 Agent Learning
The objective of each agent is to learn the best values of d that will lead to the best system performance, G. In this paper we assume that each agent will have a reward function and will aim to maximize its reward using its own reinforcement learner [15] (though alternatives such as evolving neuro-controllers are also effective [1]). For complex delayed-reward problems, relatively sophisticated reinforcement learning systems such as temporal difference may have to be used. However, due to our agent selection and agent action set, the air traffic congestion domain modeled in this paper only needs to utilize immediate rewards. As a consequence, simple table-based immediate-reward reinforcement learning is used. Our reinforcement learner is equivalent to an ε-greedy Q-learner with a discount rate of 0 [15]. At every episode an agent takes an action and then receives a reward evaluating that action. After taking action a and receiving reward R, an agent updates its Q table (which contains its estimate of the value for taking that action [15]) as follows:

Q′(a) = (1 − l) Q(a) + l R ,   (6)

where l is the learning rate. At every time step the agent chooses the action with the highest table value with probability 1 − ε and chooses a random action with probability ε. In the experiments described in this paper, the learning rate l is equal to 0.5 and ε is equal to 0.25. The parameters were chosen experimentally, though system performance was not overly sensitive to these parameters.
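A minimal sketch of such a learner follows, assuming a small discrete set of candidate MIT values as the action set; the paper does not spell out the exact action discretization, so the values below are illustrative.

```python
import random

class FixAgent:
    # Table-based immediate-reward learner for one fix: an epsilon-greedy
    # Q-learner with discount rate 0, following Equation 6.
    def __init__(self, mit_values=(0, 10, 20, 30, 40), lr=0.5, epsilon=0.25):
        self.actions = list(mit_values)  # candidate MIT separations (miles)
        self.lr = lr                     # learning rate l in Equation 6
        self.epsilon = epsilon           # exploration probability
        self.q = {a: 0.0 for a in self.actions}

    def choose_action(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)             # explore
        return max(self.actions, key=lambda a: self.q[a])  # exploit

    def update(self, action, reward):
        # Q'(a) = (1 - l) Q(a) + l R
        self.q[action] = (1.0 - self.lr) * self.q[action] + self.lr * reward
```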
3.4 Agent Reward Structure
The final issue that needs to be addressed is selecting the reward structure for the learning agents. The first and most direct approach is to let each agent receive the system performance as its reward. However, in many domains such a reward structure leads to slow learning. We will therefore also set up a second set of reward structures based on agent-specific rewards. Given that agents aim to maximize their own rewards, a critical task is to create good agent rewards, or rewards that when pursued by the agents lead to good overall system performance. In this work we focus on difference rewards, which aim to provide a reward that is both sensitive to that agent's actions and aligned with the overall system reward [2, 17, 18].

3.4.1 Difference Rewards
Consider difference rewards of the form [2, 17, 18]:

D_i ≡ G(z) − G(z − z_i + c_i) ,   (7)

where z_i is the action of agent i. All the components of z that are affected by agent i are replaced with the fixed constant c_i. (This notation uses zero padding and vector addition rather than concatenation to form full state vectors from partial state vectors: the vector z_i here would be z_i e_i in standard vector notation, where e_i is a vector with a value of 1 in the ith component and zero everywhere else.) In many situations it is possible to use a c_i that is equivalent to taking agent i out of the system. Intuitively this causes the second term of the difference reward to evaluate the performance of the system without i, and therefore D evaluates the agent's contribution to the system performance. There are two advantages to using D: First, because the second term removes a significant portion of the impact of other agents in the system, it provides an agent with a cleaner signal than G. This benefit has been dubbed learnability (agents have an easier time learning) in previous work [2, 17]. Second, because the second term does not depend on the actions of agent i, any action by agent i that improves D also improves G. This property, which measures the amount of alignment between two rewards, has been dubbed factoredness in previous work [2, 17].

3.4.2 Estimates of Difference Rewards
Though providing a good compromise between aiming for system performance and removing the impact of other agents from an agent's reward, one issue that may plague D is computational cost. Because it relies on the computation of the counterfactual term G(z − z_i + c_i) (i.e., the system performance without agent i), it may be difficult or impossible to compute, particularly when the exact mathematical form of G is not known. Let us focus on G functions of the following form:

G(z) = G_f(f(z)) ,   (8)

where G_f(·) is non-linear with a known functional form, and

f(z) = Σ_i f_i(z_i) ,   (9)

where each f_i is an unknown non-linear function. We assume that we can sample values from f(z), enabling us to compute G, but that we cannot sample from each f_i(z_i). In addition, we assume that G_f is much easier to compute than f(z), or that we may not be able to even compute f(z) directly and must sample it from a black box computation. This form of G matches our system evaluation in the air traffic domain. When we arrange agents so that each aircraft is typically only affected by a single agent, each agent's impact on the counts of the number of aircraft in a sector, k_{s,t}, will be mostly independent of the other agents. These values of k_{s,t} are the f(z)s in our formulation and the penalty functions form G_f. Note that given aircraft counts, the penalty functions (G_f) can be easily computed in microseconds, while aircraft counts (f) can only be computed by running FACET, taking on the order of seconds. To compute our counterfactual G(z − z_i + c_i) we need to compute:

G_f(f(z − z_i + c_i)) = G_f( Σ_{j≠i} f_j(z_j) + f_i(c_i) )   (10)
                     = G_f( f(z) − f_i(z_i) + f_i(c_i) ) .   (11)

Unfortunately, we cannot compute this directly as the values of f_i(z_i) are unknown. However, if agents take actions independently (an agent does not observe how other agents act before taking its own action), we can take advantage of the linear form of f(z) in the f_is with the following equality:

E(f_{−i}(z_{−i}) | z_i) = E(f_{−i}(z_{−i}) | c_i) ,   (12)

where E(f_{−i}(z_{−i}) | z_i) is the expected value of all of the fs other than f_i given the value of z_i, and E(f_{−i}(z_{−i}) | c_i) is the expected value of all of the fs other than f_i given that the value of z_i is changed to c_i. We can then estimate f(z − z_i + c_i):

f(z) − f_i(z_i) + f_i(c_i)
 = f(z) − f_i(z_i) + f_i(c_i) + E(f_{−i}(z_{−i}) | c_i) − E(f_{−i}(z_{−i}) | z_i)
 = f(z) − E(f_i(z_i) | z_i) + E(f_i(c_i) | c_i) + E(f_{−i}(z_{−i}) | c_i) − E(f_{−i}(z_{−i}) | z_i)
 = f(z) − E(f(z) | z_i) + E(f(z) | c_i) .

Therefore we can evaluate D_i = G(z) − G(z − z_i + c_i) as:

D_i^{est1} = G_f(f(z)) − G_f( f(z) − E(f(z) | z_i) + E(f(z) | c_i) ) ,   (13)

leaving us with the task of estimating the values of E(f(z) | z_i) and E(f(z) | c_i). These estimates can be computed by keeping a table of averages where we average the values of the observed f(z) for each value of z_i that we have seen. This estimate should improve as the number of samples increases. To improve our estimates, we can set c_i = E(z), and if we make the mean squared approximation f(E(z)) ≈ E(f(z)), then we can estimate G(z) − G(z − z_i + c_i) as:

D_i^{est2} = G_f(f(z)) − G_f( f(z) − E(f(z) | z_i) + E(f(z)) ) .   (14)

This formulation has the advantage that we have more samples at our disposal to estimate E(f(z)) than we do to estimate E(f(z) | c_i).
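The table-of-averages estimation behind Equations 13 and 14 can be sketched as follows. For readability f(z) is treated as a single scalar sample here (in the paper it is the vector of sector counts), and all names are our own:

```python
from collections import defaultdict

class DifferenceRewardEstimator:
    # Keeps running averages of observed f(z) conditioned on the agent's
    # own action z_i, giving the E(f(z)|z_i) terms of Equations 13-14.
    def __init__(self):
        self.sums = defaultdict(float)   # sum of f(z) samples per action
        self.counts = defaultdict(int)   # number of samples per action
        self.total, self.n = 0.0, 0      # for the unconditioned E(f(z))

    def observe(self, action, f_z):
        self.sums[action] += f_z
        self.counts[action] += 1
        self.total += f_z
        self.n += 1

    def mean_given(self, action):
        # Average of f(z) over episodes in which z_i equaled this action;
        # defaults to 0.0 for actions never yet observed.
        return self.sums[action] / max(self.counts[action], 1)

    def d_est1(self, G_f, f_z, action, c_i):
        # D_i^est1 = G_f(f(z)) - G_f(f(z) - E(f(z)|z_i) + E(f(z)|c_i)),
        # where c_i must be one of the observable action values.
        return G_f(f_z) - G_f(f_z - self.mean_given(action) + self.mean_given(c_i))

    def d_est2(self, G_f, f_z, action):
        # D_i^est2 swaps E(f(z)|c_i) for the better-sampled E(f(z)).
        return G_f(f_z) - G_f(f_z - self.mean_given(action) + self.total / max(self.n, 1))
```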
4. SIMULATION RESULTS
In this paper we test the performance of our agent-based air traffic optimization method on a series of simulations using the FACET air traffic simulator. In all experiments we test the performance of five different methods. The first method is Monte Carlo estimation, where random policies are created, with the best policy being chosen. The other four methods are agent-based methods where the agents are maximizing one of the following rewards:
1. The system reward, G(z), as defined in Equation 1.
2. The difference reward, D_i(z), assuming that agents can calculate counterfactuals.
3. An estimate of the difference reward, D_i^{est1}(z), where agents estimate the counterfactual using E(f(z) | z_i) and E(f(z) | c_i).
4. An estimate of the difference reward, D_i^{est2}(z), where agents estimate the counterfactual using E(f(z) | z_i) and E(f(z)).
These methods are first tested on an air traffic domain with 300 aircraft, where 200 of the aircraft are going through a single point of congestion over a four hour simulation. Agents are responsible for reducing congestion at this single point, while trying to minimize delay. The methods are then tested on a more difficult problem, where a second point of congestion is added with the 100 remaining aircraft going through this second point of congestion. In all experiments the goal of the system is to maximize the system performance given by G(z) with the parameters a = 50, b = 0.3, τ_{s1} equal to 200 minutes and τ_{s2} equal to 175 minutes. These values of τ are obtained by examining the time at which most of the aircraft leave the sectors when no congestion control is being performed. Except where noted, the trade-off between congestion and lateness, α, is set to 0.5. In all experiments, to make the agent results comparable to the Monte Carlo estimation, the best policies chosen by the agents are used in the results. All results are an average of thirty independent trials with the differences in the mean (σ/√n) shown as error bars, though in most cases the error bars are too small to see. The per-episode procedure shared by the agent-based methods is sketched below.
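This is a minimal sketch of that procedure, with FACET abstracted behind a hypothetical run_facet callable (we do not know FACET's actual scripting interface) and the reward choice injected as a function; all names are our own:

```python
def train(agents, episodes, reward_fn, run_facet, system_evaluation):
    # agents: {agent_id: FixAgent}; reward_fn maps (agent_id, action,
    # counts, g) to that agent's reward (G, D, or one of the two estimates).
    best_g, best_policy = float("-inf"), None
    for _ in range(episodes):
        # 1. Every fix agent picks an MIT value epsilon-greedily.
        actions = {i: agent.choose_action() for i, agent in agents.items()}
        # 2. One simulator run yields the sector counts f(z) for this joint action.
        counts = run_facet(actions)
        g = system_evaluation(counts)
        # 3. Credit each agent with its own reward and update its Q table.
        for i, agent in agents.items():
            agent.update(actions[i], reward_fn(i, actions[i], counts, g))
        # 4. Keep the best joint policy seen, as reported in the results.
        if g > best_g:
            best_g, best_policy = g, dict(actions)
    return best_g, best_policy
```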
Figure 3: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .5.

4.1 Single Congestion
In the first experiment we test the performance of the five methods when there is a single point of congestion, with twenty agents. This point of congestion is created by setting up a series of flight plans that cause the number of aircraft in the sector of interest to be significantly more than the number allowed by the FAA. The results displayed in Figures 3 and 4 show the performance of all five algorithms on two different system evaluations. In both cases, the agent-based methods significantly outperform the Monte Carlo method. This result is not surprising, since the agent-based methods intelligently explore their space, whereas the Monte Carlo method explores the space randomly.

Figure 4: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Among the agent-based methods, agents using difference rewards perform better than agents using the system reward. Again this is not surprising, since with twenty agents, an agent directly trying to maximize the system reward has difficulty determining the effect of its actions on its own reward. Even if an agent takes an action that reduces congestion and lateness, other agents at the same time may take actions that increase congestion and lateness, causing the agent to wrongly believe that its action was poor. In contrast, agents using the difference reward have more influence over the value of their own reward; therefore when an agent takes a good action, the value of this action is more likely to be reflected in its reward. This experiment also shows that estimating the difference reward is not only possible, but also quite effective, when the true value of the difference reward cannot be computed. While agents using the estimates do not achieve results as high as agents using the true difference reward, they still perform significantly better than agents using the system reward. Note, however, that the benefit of the estimated difference rewards is only present later in learning. Earlier in learning, the estimates are poor, and agents using the estimated difference rewards perform no better than agents using the system reward.

4.2 Two Congestions
In the second experiment we test the performance of the five methods on a more difficult problem with two points of congestion. On this problem the first region of congestion is the same as in the previous problem, and the second region of congestion is added in a different part of the country. The second congestion is less severe than the first one, so agents have to form different policies depending on which point of congestion they are influencing.

Figure 5: Performance on two congestion problem, with 300 Aircraft, 20 Agents and α = .5.

Figure 6: Performance on two congestion problem, with 300 Aircraft, 50 Agents and α = .5.

The results displayed in Figure 5 show that the relative performance of the five methods is similar to the single congestion case. Again agent-based methods perform better than the Monte Carlo method, and the agents using difference rewards perform better than agents using the system reward. To verify that the performance improvement of our methods is maintained when there are a different number of agents, we perform additional experiments with 50 agents. The results displayed in Figure 6 show that indeed the relative performances of the methods are comparable when the number of agents is increased to 50. Figure 7 shows scaling results and demonstrates that the conclusions hold over a wide range of numbers of agents. Agents using D^{est2} perform slightly better than agents using D^{est1} in all cases but for 50 agents. This slight advantage stems from D^{est2} providing the agents with a cleaner signal, since its estimate uses more data points.

Figure 7: Impact of number of agents on system performance. Two congestion problem, with 300 Aircraft and α = .5.

4.3 Penalty Tradeoffs
The system evaluation function used in the experiments is G(z) = −((1 − α)B(z) + αC(z)), which comprises penalties for both congestion and lateness. This evaluation function forces the agents to trade off these relative penalties depending on the value of α. With high α the optimization focuses on reducing congestion, while with low α the system focuses on reducing lateness. To verify that the results obtained above are not specific to a particular value of α, we repeat the experiment with 20 agents for α = .75. Figure 8 shows that qualitatively the relative performance of the algorithms remains the same.
Figure 8: Performance on two congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Next, we perform a series of experiments where α ranges from 0.0 to 1.0. Figure 9 shows the results, which lead to three interesting observations:
• First, there is a zero congestion penalty solution. This solution has agents enforce large MIT values to block all air traffic, which appears viable when the system evaluation does not account for delays. All algorithms find this solution, though it is of little interest in practice due to the large delays it would cause.
• Second, if the two penalties were independent, an optimal solution would be a line between the two end points. Therefore, unless D is far from being optimal, the two penalties are not independent. Note that for α = 0.5 the difference between D and this hypothetical line is as large as it is anywhere else, making α = 0.5 a reasonable choice for testing the algorithms in a difficult setting.
• Third, Monte Carlo and G are particularly poor at handling multiple objectives. For both algorithms, the performance degrades significantly for mid-ranges of α.

Figure 9: Tradeoff between objectives on two congestion problem, with 300 Aircraft and 20 Agents. Note that Monte Carlo and G are particularly bad at handling multiple objectives.

4.4 Computational Cost
The results in the previous section show the performance of the different algorithms after a specific number of episodes. Those results show that D is significantly superior to the other algorithms. One question that arises, though, is what computational overhead D puts on the system, and what results would be obtained if the additional computational expense of D were made available to the other algorithms. The computational cost of the system evaluation, G (Equation 1), is almost entirely dependent on the computation of the airplane counts for the sectors, k_{s,t}, which need to be computed using FACET. Except when D is used, the values of k are computed once per episode. However, to compute the counterfactual term in D, if FACET is treated as a black box, each agent would have to compute its own values of k for its counterfactual, resulting in n + 1 computations of k per episode. While it may be possible to streamline the computation of D with some knowledge of the internals of FACET, given the complexity of the FACET simulation, it is not unreasonable in this case to treat it as a black box. Table 1 shows the performance of the algorithms after 2100 G computations for each of the algorithms for the simulations presented in Figure 5, where there were 20 agents, 2 congestions and α = .5. All the algorithms except the fully computed D reach 2100 k computations at time step 2100. D, however, computes k once for the system and then once for each agent, leading to 21 computations per time step. It therefore reaches 2100 computations at time step 100. We also show the results of the full D computation at t = 2100, which needs 44,100 computations of k, as D_{44K}.

Table 1: System performance for 20 Agents, 2 congestions and α = .5, after 2100 G evaluations (except for D_{44K}, which has 44100 G evaluations at t = 2100).

Reward      G        σ/√n    time
D^{est2}    -232.5   7.55    2100
D^{est1}    -234.4   6.83    2100
D           -277.0   7.8     100
D_{44K}     -219.9   4.48    2100
G           -412.6   13.6    2100
MC          -639.0   16.4    2100

Although D_{44K} provides the best result by a slight margin, it is achieved at a considerable computational cost.
Indeed, the performance of the two D estimates is remarkable in this case, as they were obtained with about twenty times fewer computations of k. Furthermore, the two D estimates significantly outperform the full D computation for a given number of computations of k and validate the assumptions made in Section 3.4.2. This shows that for this domain, when FACET is treated as a black box, it is in practice more fruitful to perform more learning steps with an approximation of D than fewer learning steps with the full D computation.

5. DISCUSSION
The efficient, safe and reliable management of air traffic flow is a complex problem, requiring solutions that integrate control policies with time horizons ranging from minutes up to a year. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented and to test that algorithm using FACET, a simulation tool widely used by the FAA, NASA and the industry. Our method is based on agents representing fixes, with each agent determining the separation between aircraft approaching its fix. It offers the significant benefit of not requiring radical changes to the current air flow management structure and is therefore readily deployable. The agents use reinforcement learning to learn control policies, and we explore different agent reward functions and different ways of estimating those functions. We are currently extending this work in three directions. First, we are exploring new methods of estimating agent rewards, to further speed up the simulations. Second, we are investigating deployment strategies and looking for modifications that would have larger impact. One such modification is to extend the definition of agents from fixes to sectors, giving agents more opportunity to control the traffic flow and allowing them to be more efficient in eliminating congestion. Finally, in cooperation with domain experts, we are investigating different system evaluation functions, above and beyond the delay- and congestion-dependent G presented in this paper.

Acknowledgments: The authors thank Banavar Sridhar for his invaluable help in describing both current air traffic flow management and NGATS, and Shon Grabbe for his detailed tutorials on FACET.

6. REFERENCES
[1] A. Agogino and K. Tumer. Efficient evaluation functions for multi-rover systems. In The Genetic and Evolutionary Computation Conference, pages 1-12, Seattle, WA, June 2004.
[2] A. Agogino and K. Tumer. Multi agent reward analysis for learning in noisy domains. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi-Agent Systems, Utrecht, Netherlands, July 2005.
[3] A. K. Agogino and K. Tumer. Handling communication restrictions and team formation in congestion games. Journal of Autonomous Agents and Multi-Agent Systems, 13(1):97-115, 2006.
[4] K. D. Bilimoria, B. Sridhar, G. B. Chatterji, K. S. Sheth, and S. R. Grabbe. FACET: Future ATM concepts evaluation tool. Air Traffic Control Quarterly, 9(1), 2001.
[5] K. D. Bilimoria. A geometric optimization approach to aircraft conflict resolution. In AIAA Guidance, Navigation, and Control Conference, Denver, CO, 2000.
[6] M. S. Eby and W. E. Kelly III. Free flight separation assurance using distributed algorithms. In Proceedings of the Aerospace Conference, Aspen, CO, 1999.
[7] FAA OPSNET data Jan-Dec 2005. US Department of Transportation website.
[8] S. Grabbe and B. Sridhar. Central east pacific flight routing. In AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, CO, 2006.
[9] J. C. Hill, F. R. Johnson, J. K. Archibald, R. L. Frost, and W. C. Stirling. A cooperative multi-agent approach to free flight. In AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1083-1090, New York, NY, USA, 2005. ACM Press.
[10] P. K. Menon, G. D. Sweriduk, and B. Sridhar. Optimal strategies for free flight air traffic conflict resolution. Journal of Guidance, Control, and Dynamics, 22(2):202-211, 1999.
[11] 2006 NASA Software of the Year Award Nomination. FACET: Future ATM concepts evaluation tool. Case no. ARC-14653-1, 2006.
[12] M. Pechoucek, D. Sislak, D. Pavlicek, and M. Uller. Autonomous agents for air-traffic deconfliction. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi-Agent Systems, Hakodate, Japan, May 2006.
[13] B. Sridhar and S. Grabbe. Benefits of direct-to in national airspace system. In AIAA Guidance, Navigation, and Control Conference, Denver, CO, 2000.
[14] B. Sridhar, T. Soni, K. Sheth, and G. B. Chatterji. Aggregate flow model for air-traffic management. Journal of Guidance, Control, and Dynamics, 29(4):992-997, 2006.
[15] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[16] C. Tomlin, G. Pappas, and S. Sastry. Conflict resolution for air traffic management. IEEE Transactions on Automatic Control, 43(4):509-521, 1998.
[17] K. Tumer and D. Wolpert, editors. Collectives and the Design of Complex Systems. Springer, New York, 2004.
[18] D. H. Wolpert and K. Tumer. Optimal payoff functions for members of collectives. Advances in Complex Systems, 4(2/3):265-279, 2001.
Distributed Agent-Based Air Traffic Flow Management ABSTRACT Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET--an air traffic flow simulator developed at NASA and used extensively by the FAA and industry--to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going though that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. Our FACET based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation). 1. INTRODUCTION The efficient, safe and reliable management of our ever increasing air traffic is one of the fundamental challenges facing the aerospace industry today. On a typical day, more than 40,000 commercial flights operate within the US airspace [14]. In order to efficiently and safely route this air traffic, current traffic flow control relies on a centralized, hierarchical routing strategy that performs flow projections ranging from one to six hours. As a consequence, the system is slow to respond to developing weather or airport conditions leading potentially minor local delays to cascade into large regional congestions. In 2005, weather, routing decisions and airport conditions caused 437,667 delays, accounting for 322,272 hours of delays. The total cost of these delays was estimated to exceed three billion dollars by industry [7]. Furthermore, as the traffic flow increases, the current procedures increase the load on the system, the airports, and the air traffic controllers (more aircraft per region) without providing any of them with means to shape the traffic patterns beyond minor reroutes. The Next Generation Air Transportation Systems (NGATS) initiative aims to address this issues and, not only account for a threefold increase in traffic, but also for the increasing heterogeneity of aircraft and decreasing restrictions on flight paths. Unlike many other flow problems where the increasing traffic is to some extent absorbed by improved hardware (e.g., more servers with larger memories and faster CPUs for internet routing) the air traffic domain needs to find mainly algorithmic solutions, as the infrastructure (e.g., number of the airports) will not change significantly to impact the flow problem. There is therefore a strong need to explore new, distributed and adaptive solutions to the air flow control problem. 
An adaptive, multi-agent approach is an ideal fit to this naturally distributed problem where the complex interaction among the aircraft, airports and traffic controllers renders a pre-determined centralized solution severely suboptimal at the first deviation from the expected plan. Though a truly distributed and adaptive solution (e.g., free flight where aircraft can choose almost any path) offers the most potential in terms of optimizing flow, it also provides the most radical departure from the current system. As a consequence, a shift to such a system presents tremendous difficulties both in terms of implementation (e.g., scheduling and airport capacity) and political fallout (e.g., impact on air traffic controllers). In this paper, we focus on agent based system that can be implemented readily. In this approach, we assign an agent to a "fix," a specific location in 2D. Because aircraft flight plans consist of a sequence of fixes, this representation allows localized fixes (or agents) to have direct impact on the flow of air traffic1. In this approach, the agents' actions are to set the separation that approaching aircraft are required to keep. This simple agent-action pair allows the agents to slow down or speed up local traffic and allows agents to a have significant impact on the overall air traffic flow. Agents learn the most appropriate separation for their location using a reinforcement learning (RL) algorithm [15]. In a reinforcement learning approach, the selection of the agent reward has a large impact on the performance of the system. In this work, we explore four different agent reward functions, and compare them to simulating various changes to the system and selecting the best solution (e.g, equivalent to a Monte-Carlo search). The first explored reward consisted of the system reward. The second reward was a personalized agent reward based on collectives [3, 17, 18]. The last two rewards were personalized rewards based on estimations to lower the computational burden of the reward computation. All three personalized rewards aim to align agent rewards with the system reward and ensure that the rewards remain sensitive to the agents' actions. Previous work in this domain fell into one of two distinct categories: The first principles based modeling approaches used by domain experts [5, 8, 10, 13] and the algorithmic approaches explored by the learning and/or agents community [6, 9, 12]. Though our approach comes from the second category, we aim to bridge the gap by using FACET to test our algorithms, a simulator introduced and widely used (i.e., over 40 organizations and 5000 users) by work in the first category [4, 11]. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented and test that algorithm using FACET. In Section 2, we describe the air traffic flow problem and the simulation tool, FACET. In Section 3, we present the agent-based approach, focusing on the selection of the agents and their action space along with the agents' learning algorithms and reward structures. In Section 4 we present results in domains with one and two congestions, explore different trade-offs of the system objective function, discuss the scaling properties of the different agent rewards and discuss the computational cost of achieving certain levels of performance. 
Finally, in Section 5, we discuss the implications of these results and provide and map the required work to enable the FAA to reach its stated goal of increasing the traffic volume by threefold. 2. AIR TRAFFIC FLOW MANAGEMENT With over 40,000 flights operating within the United States airspace on an average day, the management of traffic flow is a complex and demanding problem. Not only are there concerns for the efficiency of the system, but also for fairness (e.g., different airlines), adaptability (e.g., developing weather patterns), reliability and safety (e.g., airport management). In order to address such issues, the management of this traffic flow occurs over four hierarchical levels: 1. Separation assurance (2-30 minute decisions); 1We discuss how flight plans with few fixes can be handled in more detail in Section 2. 2. Regional flow (20 minutes to 2 hours); 3. National flow (1-8 hours); and 4. Dynamic airspace configuration (6 hours to 1 year). Because of the strict guidelines and safety concerns surrounding aircraft separation, we will not address that control level in this paper. Similarly, because of the business and political impact of dynamic airspace configuration, we will not address the outermost flow control level either. Instead, we will focus on the regional and national flow management problems, restricting our impact to decisions with time horizons between twenty minutes and eight hours. The proposed algorithm will fit between long term planning by the FAA and the very short term decisions by air traffic controllers. The continental US airspace consists of 20 regional centers (handling 200-300 flights on a given day) and 830 sectors (handling 10-40 flights). The flow control problem has to address the integration of policies across these sectors and centers, account for the complexity of the system (e.g., over 5200 public use airports and 16,000 air traffic controllers) and handle changes to the policies caused by weather patterns. Two of the fundamental problems in addressing the flow problem are: (i) modeling and simulating such a large complex system as the fidelity required to provide reliable results is difficult to achieve; and (ii) establishing the method by which the flow management is evaluated, as directly minimizing the total delay may lead to inequities towards particular regions or commercial entities. Below, we discuss how we addressed both issues, namely, we present FACET a widely used simulation tool and discuss our system evaluation function. Figure 1: FACET screenshot displaying traffic routes and air flow statistics. 2.1 FACET FACET (Future ATM Concepts Evaluation Tool), a physics based model of the US airspace was developed to accurately model the complex air traffic flow problem [4]. It is based on propagating the trajectories of proposed flights forward in time. FACET can be used to either simulate and display air traffic (a 24 hour slice with 60,000 flights takes 15 minutes to simulate on a 3 GHz, 1 GB RAM computer) or provide rapid statistics on recorded data (4D trajectories for 10,000 flights including sectors, airports, and fix statistics in 10 seconds on the same computer) [11]. FACET is extensively used by The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 343 the FAA, NASA and industry (over 40 organizations and 5000 users) [11]. 
FACET simulates air traffic based on flight plans and through a graphical user interface allows the user to analyze congestion patterns of different sectors and centers (Figure 1). FACET also allows the user to change the flow patterns of the aircraft through a number of mechanisms, including metering aircraft through fixes. The user can then observe the effects of these changes to congestion. In this paper, agents use FACET directly through "batch mode", where agents send scripts to FACET asking it to simulate air traffic based on metering orders imposed by the agents. The agents then produce their rewards based on receive feedback from FACET about the impact of these meterings. 2.2 System Evaluation The system performance evaluation function we select focuses on delay and congestion but does not account for fairness impact on different commercial entities. Instead it focuses on the amount of congestion in a particular sector and on the amount of measured air traffic delay. The linear combination of these two terms gives the full system evaluation function, G (z) as a function of the full system state z. More precisely, we have: where B (z) is the total delay penalty for all aircraft in the system, and C (z) is the total congestion penalty. The relative importance of these two penalties is determined by the value of α, and we explore various trade-offs based on α in Section 4. The total delay, B, is a sum of delays over a set of sectors S and is given by: where where ks, t is the number of aircraft in sector s at a particular time, τs is a predetermined time, and Θ (•) is the step function that equals 1 when its argument is greater or equal to zero, and has a value of zero otherwise. Intuitively, Bs (z) provides the total number of aircraft that remain in a sector s past a predetermined time τs, and scales their contribution to count by the amount by which they are late. In this manner Bs (z) provides a delay factor that not only accounts for all aircraft that are late, but also provides a scale to measure their "lateness". This definition is based on the assumption that most aircraft should have reached the sector by time τs and that aircraft arriving after this time are late. In this paper the value of τs is determined by assessing aircraft counts in the sector in the absence of any intervention or any deviation from predicted paths. Similarly, the total congestion penalty is a sum over the congestion penalties over the sectors of observation, S: where where a and b are normalizing constants, and cs is the capacity of sector s as defined by the FAA. Intuitively, Cs (z) penalizes a system state where the number of aircraft in a sector exceeds the FAAs official sector capacity. Each sector capacity is computed using various metrics which include the number of air traffic controllers available. The exponential penalty is intended to provide strong feedback to return the number of aircraft in a sector to below the FAA mandated capacities. 3. AGENT BASED AIR TRAFFIC FLOW The multi agent approach to air traffic flow management we present is predicated on adaptive agents taking independent actions that maximize the system evaluation function discussed above. To that end, there are four critical decisions that need to be made: agent selection, agent action set selection, agent learning algorithm selection and agent reward structure selection. 3.1 Agent Selection Selecting the aircraft as agents is perhaps the most obvious choice for defining an agent. 
That selection has the advantage that agent actions can be intuitive (e.g., change of flight plan, increase or decrease speed and altitude) and offer a high level of granularity, in that each agent can have its own policy. However, there are several problems with that approach. First, there are in excess of 40,000 aircraft in a given day, leading to a massively large multi-agent system. Second, as the agents would not be able to sample their state space sufficiently, learning would be prohibitively slow. As an alternative, we assign agents to individual ground locations throughout the airspace called "fixes." Each agent is then responsible for any aircraft going through its fix. Fixes offer many advantages as agents: 1. Their number can vary depending on need. The system can have as many agents as required for a given situation (e.g., agents coming "live" around an area with developing weather conditions). 2. Because fixes are stationary, collecting data and matching behavior to reward is easier. 3. Because aircraft flight plans consist of fixes, agent will have the ability to affect traffic flow patterns. 4. They can be deployed within the current air traffic routing procedures, and can be used as tools to help air traffic controllers rather than compete with or replace them. Figure 2 shows a schematic of this agent based system. Agents surrounding a congestion or weather condition affect the flow of traffic to reduce the burden on particular regions. 3.2 Agent Actions The second issue that needs to be addressed, is determining the action set of the agents. Again, an obvious choice may be for fixes to "bid" on aircraft, affecting their flight plans. Though appealing from a free flight perspective, that approach makes the flight plans too unreliable and significantly complicates the scheduling problem (e.g., arrival at airports and the subsequent gate assignment process). Instead, we set the actions of an agent to determining the separation (distance between aircraft) that aircraft have 344 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) to maintain, when going through the agent's fix. This is known as setting the "Miles in Trail" or MIT. When an agent sets the MIT value to d, aircraft going towards its fix are instructed to line up and keep d miles of separation (though aircraft will always keep a safe distance from each other regardless of the value of d). When there are many aircraft going through a fix, the effect of issuing higher MIT values is to slow down the rate of aircraft that go through the fix. By increasing the value of d, an agent can limit the amount of air traffic downstream of its fix, reducing congestion at the expense of increasing the delays upstream. Figure 2: Schematic of agent architecture. The agents corresponding to fixes surrounding a possible congestion become "live" and start setting new separation times. 3.3 Agent Learning The objective of each agent is to learn the best values of d that will lead to the best system performance, G. In this paper we assume that each agent will have a reward function and will aim to maximize its reward using its own reinforcement learner [15] (though alternatives such as evolving neuro-controllers are also effective [1]). For complex delayed-reward problems, relatively sophisticated reinforcement learning systems such as temporal difference may have to be used. 
However, due to our agent selection and agent action set, the air traffic congestion domain modeled in this paper only needs to utilize immediate rewards. As a consequence, simple table-based immediate reward reinforcement learning is used. Our reinforcement learner is equivalent to an e-greedy Q-learner with a discount rate of 0 [15]. At every episode an agent takes an action and then receives a reward evaluating that action. After taking action a and receiving reward R an agent updates its Q table (which contains its estimate of the value for taking that action [15]) as follows: where l is the learning rate. At every time step the agent chooses the action with the highest table value with probability 1--e and chooses a random action with probability E. In the experiments described in this paper, α is equal to 0.5 and a is equal to 0.25. The parameters were chosen experimentally, though system performance was not overly sensitive to these parameters. 3.4 Agent Reward Structure The final issue that needs to be addressed is selecting the reward structure for the learning agents. The first and most direct approach is to let each agent receive the system performance as its reward. However, in many domains such a reward structure leads to slow learning. We will therefore also set up a second set of reward structures based on agent-specific rewards. Given that agents aim to maximize their own rewards, a critical task is to create "good" agent rewards, or rewards that when pursued by the agents lead to good overall system performance. In this work we focus on difference rewards which aim to provide a reward that is both sensitive to that agent's actions and aligned with the overall system reward [2, 17, 18]. 3.4.1 Difference Rewards Consider difference rewards of the form [2, 17, 18]: where zi is the action of agent i. All the components of z that are affected by agent i are replaced with the fixed constant ci 2. In many situations it is possible to use a ci that is equivalent to taking agent i out of the system. Intuitively this causes the second term of the difference reward to evaluate the performance of the system without i and therefore D evaluates the agent's contribution to the system performance. There are two advantages to using D: First, because the second term removes a significant portion of the impact of other agents in the system, it provides an agent with a "cleaner" signal than G. This benefit has been dubbed "learnability" (agents have an easier time learning) in previous work [2, 17]. Second, because the second term does not depend on the actions of agent i, any action by agent i that improves D, also improves G. This term which measures the amount of alignment between two rewards has been dubbed "factoredness" in previous work [2, 17]. 3.4.2 Estimates of Difference Rewards Though providing a good compromise between aiming for system performance and removing the impact of other agents from an agent's reward, one issue that may plague D is computational cost. Because it relies on the computation of the counterfactual term G (z--zi + ci) (i.e., the system performance without agent i) it may be difficult or impossible to compute, particularly when the exact mathematical form of G is not known. Let us focus on G functions in the following form: where each fi is an unknown non-linear function. We assume that we can sample values from f (z), enabling us to compute G, but that we cannot sample from each fi (zi). 
2This notation uses zero padding and vector addition rather than concatenation to form full state vectors from partial state vectors. The vector "zi" in our notation would be ziei in standard vector notation, where ei is a vector with a value of 1 in the ith component and is zero everywhere else. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 345 In addition, we assume that Gf is much easier to compute than f (z), or that we may not be able to even compute f (z) directly and must sample it from a "black box" computation. This form of G matches our system evaluation in the air traffic domain. When we arrange agents so that each aircraft is typically only affected by a single agent, each agent's impact of the counts of the number of aircraft in a sector, kt, s, will be mostly independent of the other agents. These values of kt, s are the "f (z) s" in our formulation and the penalty functions form "Gf." Note that given aircraft counts, the penalty functions (Gf) can be easily computed in microseconds, while aircraft counts (f) can only be computed by running FACET taking on the order of seconds. To compute our counterfactual G (z--zi + ci) we need to compute: Unfortunately, we cannot compute this directly as the values of fi (zi) are unknown. However, if agents take actions independently (it does not observe how other agents act before taking its own action) we can take advantage of the linear form of f (z) in the fis with the following equality: where E (f--i (z--i) Izi) is the expected value of all of the f s other than fi given the value of zi and E (f--i (z--i) Ici) is the expected value of all of the f s other than fi given the value of zi is changed to ci. We can then estimate f (z--zi + ci): = f (z)--E (f (z) Izi) + E (f (z) Ici). Therefore we can evaluate Di = G (z)--G (z--zi + ci) as: leaving us with the task of estimating the values of E (f (z) Izi) and E (f (z) Ici)). These estimates can be computed by keeping a table of averages where we average the values of the observed f (z) for each value of zi that we have seen. This estimate should improve as the number of samples increases. To improve our estimates, we can set ci = E (z) and if we make the mean squared approximation of f (E (z)); z E (f (z)) then we can estimate G (z)--G (z--zi + ci) as: This formulation has the advantage in that we have more samples at our disposal to estimate E (f (z)) than we do to estimate E (f (z) Ici)). 4. SIMULATION RESULTS In this paper we test the performance of our agent based air traffic optimization method on a series of simulations using the FACET air traffic simulator. In all experiments we test the performance of five different methods. The first method is Monte Carlo estimation, where random policies are created, with the best policy being chosen. The other four methods are agent based methods where the agents are maximizing one of the following rewards: 1. The system reward, G (z), as define in Equation 1. 2. The difference reward, Di (z), assuming that agents can calculate counterfactuals. 3. Estimation to the difference reward, Dest1 i (z), where agents estimate the counterfactual using E (f (z) Izi) and E (f (z) Ici). 4. Estimation to the difference reward, Dest2 i (z), where agents estimate the counterfactual using E (f (z) Izi) and E (f (z)). These methods are first tested on an air traffic domain with 300 aircraft, where 200 of the aircraft are going through a single point of congestion over a four hour simulation. 
Agents are responsible for reducing congestion at this single point, while trying to minimize delay. The methods are then tested on a more difficult problem, where a second point of congestion is added with the 100 remaining aircraft going through this second point of congestion.

In all experiments the goal of the system is to maximize the system performance given by G(z) with the parameters a = 50, b = 0.3, τs1 equal to 200 minutes and τs2 equal to 175 minutes. These values of τ are obtained by examining the time at which most of the aircraft leave the sectors when no congestion control is being performed. Except where noted, the trade-off between congestion and lateness, α, is set to 0.5. In all experiments, to make the agent results comparable to the Monte Carlo estimation, the best policies chosen by the agents are used in the results. All results are an average of thirty independent trials with the differences in the mean (σ/√n) shown as error bars, though in most cases the error bars are too small to see.

Figure 3: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .5.

4.1 Single Congestion
In the first experiment we test the performance of the five methods when there is a single point of congestion, with twenty agents. This point of congestion is created by setting up a series of flight plans that cause the number of aircraft in the sector of interest to be significantly more than the number allowed by the FAA. The results displayed in Figures 3 and 4 show the performance of all five algorithms on two different system evaluations. In both cases, the agent-based methods significantly outperform the Monte Carlo method. This result is not surprising since the agent-based methods intelligently explore their space, whereas the Monte Carlo method explores the space randomly.

Figure 4: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Among the agent-based methods, agents using difference rewards perform better than agents using the system reward. Again this is not surprising, since with twenty agents, an agent directly trying to maximize the system reward has difficulty determining the effect of its actions on its own reward. Even if an agent takes an action that reduces congestion and lateness, other agents at the same time may take actions that increase congestion and lateness, causing the agent to wrongly believe that its action was poor. In contrast, agents using the difference reward have more influence over the value of their own reward; therefore when an agent takes a good action, the value of this action is more likely to be reflected in its reward.

This experiment also shows that estimating the difference reward is not only possible, but also quite effective, when the true value of the difference reward cannot be computed. While agents using the estimates do not achieve results as high as agents using the true difference reward, they still perform significantly better than agents using the system reward. Note, however, that the benefit of the estimated difference rewards is only present later in learning. Earlier in learning, the estimates are poor, and agents using the estimated difference rewards perform no better than agents using the system reward.

4.2 Two Congestions
In the second experiment we test the performance of the five methods on a more difficult problem with two points of congestion.
On this problem the first region of congestion is the same as in the previous problem, and the second region of congestion is added in a different part of the country. The second congestion is less severe than the first one, so agents have to form different policies depending on which point of congestion they are influencing.

Figure 5: Performance on two congestion problem, with 300 Aircraft, 20 Agents and α = .5.

Figure 6: Performance on two congestion problem, with 300 Aircraft, 50 Agents and α = .5.

The results displayed in Figure 5 show that the relative performance of the five methods is similar to the single congestion case. Again agent-based methods perform better than the Monte Carlo method, and the agents using difference rewards perform better than agents using the system reward. To verify that the performance improvement of our methods is maintained when there is a different number of agents, we perform additional experiments with 50 agents. The results displayed in Figure 6 show that indeed the relative performances of the methods are comparable when the number of agents is increased to 50. Figure 7 shows scaling results and demonstrates that the conclusions hold over a wide range of numbers of agents. Agents using D^{est2} perform slightly better than agents using D^{est1} in all cases but for 50 agents. This slight advantage stems from D^{est2} providing the agents with a cleaner signal, since its estimate uses more data points.

Figure 7: Impact of number of agents on system performance. Two congestion problem, with 300 Aircraft and α = .5.

4.3 Penalty Tradeoffs
The system evaluation function used in the experiments is G(z) = −((1 − α)D(z) + αC(z)), which comprises penalties for both congestion and lateness. This evaluation function forces the agents to trade off these relative penalties depending on the value of α. With high α the optimization focuses on reducing congestion, while with low α the system focuses on reducing lateness. To verify that the results obtained above are not specific to a particular value of α, we repeat the experiment with 20 agents for α = .75. Figure 8 shows that qualitatively the relative performance of the algorithms remains the same.

Figure 8: Performance on two congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Next, we perform a series of experiments where α ranges from 0.0 to 1.0. Figure 9 shows the results, which lead to three interesting observations:
• First, there is a zero congestion penalty solution. This solution has agents enforce large MIT values to block all air traffic, which appears viable when the system evaluation does not account for delays. All algorithms find this solution, though it is of little interest in practice due to the large delays it would cause.
• Second, if the two penalties were independent, an optimal solution would be a line from the two end points. Therefore, unless D is far from being optimal, the two penalties are not independent. Note that for α = 0.5 the difference between D and this hypothetical line is as large as it is anywhere else, making α = 0.5 a reasonable choice for testing the algorithms in a difficult setting.
• Third, Monte Carlo and G are particularly poor at handling multiple objectives. For both algorithms, the performance degrades significantly for mid-ranges of α.

Figure 9: Tradeoff Between Objectives on two congestion problem, with 300 Aircraft and 20 Agents. Note that Monte Carlo and G are particularly bad at handling multiple objectives.

4.4 Computational Cost
The results in the previous section show the performance of the different algorithms after a specific number of episodes. Those results show that D is significantly superior to the other algorithms.
One question that arises, though, is what computational overhead D puts on the system, and what results would be obtained if the additional computational expense of D is made available to the other algorithms. The computational cost of the system evaluation G (Equation 1) is almost entirely dependent on the computation of the airplane counts for the sectors, k_{t,s}, which need to be computed using FACET. Except when D is used, the values of k are computed once per episode. However, to compute the counterfactual term in D, if FACET is treated as a "black box", each agent would have to compute its own values of k for its counterfactual, resulting in n + 1 computations of k per episode. While it may be possible to streamline the computation of D with some knowledge of the internals of FACET, given the complexity of the FACET simulation, it is not unreasonable in this case to treat it as a black box.

Table 1 shows the performance of the algorithms after 2100 G computations for each of the algorithms for the simulations presented in Figure 5, where there were 20 agents, 2 congestions and α = .5. All the algorithms except the fully computed D reach 2100 k computations at time step 2100. D, however, computes k once for the system and then once for each agent, leading to 21 computations per time step. It therefore reaches 2100 computations at time step 100. We also show the results of the full D computation at t = 2100, which needs 44100 computations of k, as D44K.

Table 1: System performance for 20 Agents, 2 congestions and α = .5, after 2100 G evaluations (except D44K, obtained after 44100 evaluations).

Although D44K provides the best result by a slight margin, it is achieved at a considerable computational cost. Indeed, the performance of the two D estimates is remarkable in this case, as they were obtained with about twenty times fewer computations of k. Furthermore, the two D estimates significantly outperform the full D computation for a given number of computations of k and validate the assumptions made in Section 3.4.2. This shows that for this domain, in practice it is more fruitful to perform more learning steps and approximate D than to perform few learning steps with the full D computation when we treat FACET as a black box.
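The accounting behind these numbers is simple enough to state in a few lines of illustrative Python: with FACET treated as a black box, a learner using the full D spends n + 1 evaluations of k per episode, while every other method spends one.

```python
def episodes_within_budget(k_budget, n_agents, reward):
    """Number of learning episodes that fit in a budget of k computations
    (Section 4.4), treating FACET as a black box."""
    cost_per_episode = (n_agents + 1) if reward == "D" else 1
    return k_budget // cost_per_episode

# With 20 agents and a budget of 2100 k computations:
# G, D_est1 and D_est2 run for 2100 episodes, the full D for only 100.
for reward in ("G", "D_est1", "D_est2", "D"):
    print(reward, episodes_within_budget(2100, 20, reward))
```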
Distributed Agent-Based Air Traffic Flow Management

ABSTRACT
Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET, an air traffic flow simulator developed at NASA and used extensively by the FAA and industry, to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going through that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. Our FACET-based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation).

1. INTRODUCTION
The efficient, safe and reliable management of our ever increasing air traffic is one of the fundamental challenges facing the aerospace industry today. On a typical day, more than 40,000 commercial flights operate within the US airspace [14]. In order to efficiently and safely route this air traffic, current traffic flow control relies on a centralized, hierarchical routing strategy that performs flow projections ranging from one to six hours. As a consequence, the system is slow to respond to developing weather or airport conditions, leading potentially minor local delays to cascade into large regional congestions. In 2005, weather, routing decisions and airport conditions caused 437,667 delays, accounting for 322,272 hours of delays. The total cost of these delays was estimated to exceed three billion dollars by industry [7]. Furthermore, as the traffic flow increases, the current procedures increase the load on the system, the airports, and the air traffic controllers (more aircraft per region) without providing any of them with means to shape the traffic patterns beyond minor reroutes. The Next Generation Air Transportation Systems (NGATS) initiative aims to address these issues, accounting not only for a threefold increase in traffic, but also for the increasing heterogeneity of aircraft and decreasing restrictions on flight paths. Unlike many other flow problems where the increasing traffic is to some extent absorbed by improved hardware (e.g., more servers with larger memories and faster CPUs for internet routing), the air traffic domain needs to find mainly algorithmic solutions, as the infrastructure (e.g., number of the airports) will not change significantly enough to impact the flow problem. There is therefore a strong need to explore new, distributed and adaptive solutions to the air flow control problem.
An adaptive, multi-agent approach is an ideal fit to this naturally distributed problem, where the complex interaction among the aircraft, airports and traffic controllers renders a pre-determined centralized solution severely suboptimal at the first deviation from the expected plan. Though a truly distributed and adaptive solution (e.g., free flight, where aircraft can choose almost any path) offers the most potential in terms of optimizing flow, it also provides the most radical departure from the current system. As a consequence, a shift to such a system presents tremendous difficulties both in terms of implementation (e.g., scheduling and airport capacity) and political fallout (e.g., impact on air traffic controllers). In this paper, we focus on an agent-based system that can be implemented readily. In this approach, we assign an agent to a "fix," a specific location in 2D. Because aircraft flight plans consist of a sequence of fixes, this representation allows localized fixes (or agents) to have direct impact on the flow of air traffic1. In this approach, the agents' actions are to set the separation that approaching aircraft are required to keep. This simple agent-action pair allows the agents to slow down or speed up local traffic and allows agents to have a significant impact on the overall air traffic flow. Agents learn the most appropriate separation for their location using a reinforcement learning (RL) algorithm [15].

In a reinforcement learning approach, the selection of the agent reward has a large impact on the performance of the system. In this work, we explore four different agent reward functions, and compare them to simulating various changes to the system and selecting the best solution (e.g., equivalent to a Monte Carlo search). The first explored reward consisted of the system reward. The second reward was a personalized agent reward based on collectives [3, 17, 18]. The last two rewards were personalized rewards based on estimations to lower the computational burden of the reward computation. All three personalized rewards aim to align agent rewards with the system reward and ensure that the rewards remain sensitive to the agents' actions. Previous work in this domain fell into one of two distinct categories: the first-principles-based modeling approaches used by domain experts [5, 8, 10, 13] and the algorithmic approaches explored by the learning and/or agents community [6, 9, 12]. Though our approach comes from the second category, we aim to bridge the gap by using FACET to test our algorithms, a simulator introduced and widely used (i.e., over 40 organizations and 5000 users) by work in the first category [4, 11]. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented and to test that algorithm using FACET. In Section 2, we describe the air traffic flow problem and the simulation tool, FACET. In Section 3, we present the agent-based approach, focusing on the selection of the agents and their action space along with the agents' learning algorithms and reward structures. In Section 4 we present results in domains with one and two congestions, explore different trade-offs of the system objective function, discuss the scaling properties of the different agent rewards and discuss the computational cost of achieving certain levels of performance.
Finally, in Section 5, we discuss the implications of these results and map out the work required to enable the FAA to reach its stated goal of increasing the traffic volume threefold.

2. AIR TRAFFIC FLOW MANAGEMENT
2.1 FACET
2.2 System Evaluation
3. AGENT BASED AIR TRAFFIC FLOW
3.1 Agent Selection
3.2 Agent Actions
3.3 Agent Learning
3.4 Agent Reward Structure
3.4.1 Difference Rewards
3.4.2 Estimates of Difference Rewards
4. SIMULATION RESULTS
4.1 Single Congestion
4.2 Two Congestions
4.3 Penalty Tradeoffs
4.4 Computational Cost
Distributed Agent-Based Air Traffic Flow Management

ABSTRACT
Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET, an air traffic flow simulator developed at NASA and used extensively by the FAA and industry, to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going through that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. Our FACET-based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation).

1. INTRODUCTION
The efficient, safe and reliable management of our ever increasing air traffic is one of the fundamental challenges facing the aerospace industry today. In order to efficiently and safely route this air traffic, current traffic flow control relies on a centralized, hierarchical routing strategy that performs flow projections ranging from one to six hours. As a consequence, the system is slow to respond to developing weather or airport conditions, leading potentially minor local delays to cascade into large regional congestions. In 2005, weather, routing decisions and airport conditions caused 437,667 delays, accounting for 322,272 hours of delays. The total cost of these delays was estimated to exceed three billion dollars by industry [7]. Furthermore, as the traffic flow increases, the current procedures increase the load on the system, the airports, and the air traffic controllers (more aircraft per region) without providing any of them with means to shape the traffic patterns beyond minor reroutes. There is therefore a strong need to explore new, distributed and adaptive solutions to the air flow control problem. An adaptive, multi-agent approach is an ideal fit to this naturally distributed problem, where the complex interaction among the aircraft, airports and traffic controllers renders a pre-determined centralized solution severely suboptimal at the first deviation from the expected plan. Though a truly distributed and adaptive solution (e.g., free flight, where aircraft can choose almost any path) offers the most potential in terms of optimizing flow, it also provides the most radical departure from the current system. As a consequence, a shift to such a system presents tremendous difficulties both in terms of implementation (e.g., scheduling and airport capacity) and political fallout (e.g., impact on air traffic controllers).
In this paper, we focus on an agent-based system that can be implemented readily. In this approach, we assign an agent to a "fix," a specific location in 2D. Because aircraft flight plans consist of a sequence of fixes, this representation allows localized fixes (or agents) to have direct impact on the flow of air traffic1. In this approach, the agents' actions are to set the separation that approaching aircraft are required to keep. This simple agent-action pair allows the agents to slow down or speed up local traffic and allows agents to have a significant impact on the overall air traffic flow. Agents learn the most appropriate separation for their location using a reinforcement learning (RL) algorithm [15]. In a reinforcement learning approach, the selection of the agent reward has a large impact on the performance of the system. In this work, we explore four different agent reward functions, and compare them to simulating various changes to the system and selecting the best solution (e.g., equivalent to a Monte Carlo search). The first explored reward consisted of the system reward. The second reward was a personalized agent reward based on collectives [3, 17, 18]. The last two rewards were personalized rewards based on estimations to lower the computational burden of the reward computation. All three personalized rewards aim to align agent rewards with the system reward and ensure that the rewards remain sensitive to the agents' actions. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented and to test that algorithm using FACET. In Section 2, we describe the air traffic flow problem and the simulation tool, FACET. In Section 3, we present the agent-based approach, focusing on the selection of the agents and their action space along with the agents' learning algorithms and reward structures. In Section 4 we present results in domains with one and two congestions, explore different trade-offs of the system objective function, discuss the scaling properties of the different agent rewards and discuss the computational cost of achieving certain levels of performance. Finally, in Section 5, we discuss the implications of these results and map out the work required to enable the FAA to reach its stated goal of increasing the traffic volume threefold.
I-75
Hypotheses Refinement under Topological Communication Constraints
We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment. Upon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations). However, we further assume that communication opportunities are severely constrained and change dynamically. In this paper, we mostly investigate the convergence of such systems towards global consistency. We first show that (for a wide class of protocols that we shall define) the communication constraints induced by the topology will not prevent the convergence of the system, on the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and argument exchange. As this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange. We study a critical situation involving a number of agents aiming at escaping from a burning building. The results reported here provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.
[ "multiag system", "favour hypothesi", "global consist", "consist", "observ set", "time point sequenc", "bound percept", "tempor path", "topolog constraint", "hypothesi exchang protocol", "bilater exchang", "mutual consist", "context request step", "inter-agent commun", "negoti and argument", "agent commun languag and protocol" ]
[ "P", "P", "P", "P", "R", "M", "M", "U", "R", "R", "M", "M", "M", "M", "M", "M" ]
Hypotheses Refinement under Topological Communication Constraints ∗ Gauvain Bourgne, Gael Hette, Nicolas Maudet, and Suzanne Pinson LAMSADE, Univ. Paris-Dauphine, France {bourgne,hette,maudet,pinson}@lamsade.dauphine.fr

ABSTRACT
We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment. Upon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations). However, we further assume that communication opportunities are severely constrained and change dynamically. In this paper, we mostly investigate the convergence of such systems towards global consistency. We first show that (for a wide class of protocols that we shall define) the communication constraints induced by the topology will not prevent the convergence of the system, on the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and argument exchange. As this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange. We study a critical situation involving a number of agents aiming at escaping from a burning building. The results reported here provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.

Categories and Subject Descriptors
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Multiagent systems

General Terms
Theory, Experimentation

1. INTRODUCTION
We consider a multiagent system where each (distributed) agent locally perceives its environment, and we assume that some unexpected event occurs in that system. If each agent computes only locally its favoured hypothesis, it is only natural to assume that agents will seek to coordinate and refine their hypotheses by confronting their observations with other agents. If, in addition, the communication opportunities are severely constrained (for instance, agents can only communicate when they are close enough to some other agent), and dynamically changing (for instance, agents may change their locations), it becomes crucial to carefully design protocols that will allow agents to converge to some desired state of global consistency. In this paper we exhibit some sufficient conditions on the system dynamics and on the protocol/strategy structures that allow us to guarantee that property, and we experimentally study some contexts where (some of) these assumptions are relaxed. While problems of diagnosis are among the venerable classics in the AI tradition, their multiagent counterparts have much more recently attracted some attention. Roos and colleagues [8, 9] in particular study a situation where a number of distributed entities try to come up with a satisfying global diagnosis of the whole system. They show in particular that the number of messages required to establish this global diagnosis is bound to be prohibitive, unless the communication is enhanced with some suitable protocol. However, they do not put any restrictions on agents' communication options, and do not assume either that the system is dynamic.
The benefits of enhancing communication with supporting information, to make convergence to a desired global state of a system more efficient, have often been put forward in the literature. This is for instance one of the main ideas underlying the argumentation-based negotiation approach [7], where the desired state is a compromise between agents with conflicting preferences. Many of these works however make the assumption that this approach is beneficial to start with, and study the technical facets of the problem (or instead emphasize other advantages of using argumentation). Notable exceptions are the works of [3, 4, 2, 5], which studied, in contexts different from ours, the efficiency of argumentation.

The rest of the paper is as follows. Section 2 specifies the basic elements of our model, and Section 3 goes on to present the different protocols and strategies used by the agents to exchange hypotheses and observations. We pay special attention to clearly emphasizing the conditions on the system dynamics and protocols/strategies that will be exploited in the rest of the paper. Section 4 details one of the main results of the paper, namely the fact that under the aforementioned conditions, the constraints that we put on the topology will not prevent the convergence of the system towards global consistency, on the condition that no agent ever gets completely lost forever in the system, and that unlimited time is allowed for computation and argument exchange. While the conditions on protocols and strategies are fairly mild, it is also clear that these system requirements look much more problematic, even frankly unrealistic, in critical situations where distributed approaches are precisely advocated. To get a clearer picture of the situation induced when time is a critical factor, we have set up an experimental framework that we introduce and discuss in Section 5. The critical situation involves a number of agents aiming at escaping from a burning building. The results reported here show that the effectiveness of argument exchange crucially depends upon the nature of the building, and provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.

2. BASIC NOTIONS
We start by defining the basic elements of our system.

Environment
Let O be the (potentially infinite) set of possible observations. We assume the sensors of our agents to be perfect, hence the observations to be certain. Let H be the set of hypotheses, uncertain and revisable. Let Cons(h, O) be the consistency relation, a binary relation between a hypothesis h ∈ H and a set of observations O ⊆ O. In most cases, Cons will refer to the classical consistency relation; however, we may overload its meaning and add some additional properties to that relation (in which case we will mention it). The environment may include some dynamics, and change over the course of time. We define below sequences of time points to deal with it:

Definition 1 (Sequence of time points). A sequence of time points t1, t2, ..., tn from t is an ordered set of time points t1, t2, ..., tn such that t1 ≥ t and ∀i ∈ [1, n − 1], ti+1 ≥ ti.

Agent
We take a system populated by n agents a1, ..., an. Each agent is defined as a tuple ⟨F, Oi, hi⟩, where:
• F, the set of facts, common knowledge to all agents.
• Oi ∈ 2^O, the set of observations made by the agent so far. We assume a perfect memory, hence this set grows monotonically.
• hi ∈ H, the favourite hypothesis of the agent.
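As a minimal illustration of the notions just introduced, the sketch below renders the agent tuple ⟨F, Oi, hi⟩ in Python. The callables cons and explain stand for the abstract relation Cons(h, O) and for the explanation function Eh discussed just below; they would be supplied by the application (e.g. a Theorist-style reasoner). All names are illustrative, and the mutual-consistency helper anticipates Definition 2.

```python
class Agent:
    """The agent tuple <F, O_i, h_i> of Section 2 (a sketch)."""

    def __init__(self, facts, cons, explain):
        self.facts = facts           # F, common knowledge to all agents
        self.cons = cons             # cons(h, observations) -> bool
        self.explain = explain       # explain(observations) -> hypothesis (E_h)
        self.observations = set()    # O_i, grows monotonically (perfect memory)
        self.hypothesis = explain(self.observations)   # h_i

    def observe(self, obs):
        self.observations.add(obs)
        # Only E_h may change h_i (agent autonomy); one simple policy is to
        # re-explain whenever the current hypothesis stops being consistent.
        if not self.cons(self.hypothesis, self.observations):
            self.hypothesis = self.explain(self.observations)

    def consistent(self):                     # Cons(a_i)
        return self.cons(self.hypothesis, self.observations)

    def consistent_with(self, other):         # Cons(a_i, a_j)
        return self.consistent() and self.cons(self.hypothesis,
                                               other.observations)

def mutually_consistent(ai, aj):              # MCons(a_i, a_j), cf. Definition 2
    return ai.consistent_with(aj) and aj.consistent_with(ai)
```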
A key notion governing the formation of hypotheses is that of consistency, defined below:

Definition 2 (Consistency). We say that:
• An agent ai is consistent (Cons(ai)) iff Cons(hi, Oi) (that is, its hypothesis is consistent with its observation set).
• An agent ai is consistent with a partner agent aj iff Cons(ai) and Cons(hi, Oj) (that is, this agent is consistent and its hypothesis can explain the observation set of the other agent).
• Two agents ai and aj are mutually consistent (MCons(ai, aj)) iff Cons(ai, aj) and Cons(aj, ai).
• A system is consistent iff ∀(i, j) ∈ [1, n]², it is the case that MCons(ai, aj).

To ensure its consistency, each agent is equipped with an abstract reasoning machinery that we shall call the explanation function Eh. This (deterministic) function takes a set of observations and returns a single preferred hypothesis (2^O → H). We assume h = Eh(O) to be consistent with O by definition of Eh, so using this function on its observation set to determine its favourite hypothesis is a sure way for the agent to achieve consistency. Note however that a hypothesis does not need to be generated by Eh to be consistent with an observation set. As a concrete example of such a function, and one of the main inspirations of this work, one can cite the Theorist reasoning system [6], as long as it is coupled with a filter selecting a single preferred theory among the ones initially selected by Theorist. Note also that hi may only be modified as a consequence of applying Eh. We refer to this as the autonomy of the agent: no other agent can directly impose a given hypothesis on an agent. As a consequence, only a new observation (be it a new perception, or an observation communicated by a fellow agent) can result in a modification of its preferred hypothesis hi (but not necessarily, of course). We finally define a property of the system that we shall use in the rest of the paper:

Definition 3 (Bounded Perceptions). A system involves bounded perceptions for agents iff ∃n0 s.t. ∀t, |∪_{i=1}^{N} Oi| ≤ n0. (That is, the number of observations to be made by the agents in the system is not infinite.)

Agent Cycle
Now we need to see how these agents will evolve and interact in their environment. In our context, agents evolve in a dynamic environment, and we classically assume the following system cycle:
1. Environment dynamics: the environment evolves according to the defined rules of the system dynamics.
2. Perception step: agents get perceptions from the environment. These perceptions are typically partial (e.g. the agent can only see a portion of a map).
3. Reasoning step: agents compare perception with predictions, seek explanations for (potential) difference(s), refine their hypothesis, draw new conclusions.
4. Communication step: agents can communicate hypotheses and observations with other agents through a defined protocol. Any agent can only be involved in one communication with another agent per step.
5. Action step: agents do some practical reasoning using the models obtained from the previous steps and select an action. They can then modify the environment by executing it.

The communication of the agents will be further constrained by topological considerations. At a given time, an agent will only be able to communicate with a number of neighbours. Its connections with these other agents may evolve with its situation in the environment.
Typically, an agent can only communicate with agents that it can sense, but one could imagine evolving topological constraints on communication based on a network of communications between agents where the links are not always active.

Communication
In our system, agents will be able to communicate with each other. However, due to the aforementioned topological constraints, they will not be able to communicate with any agent at any time. Who an agent can communicate with will be defined dynamically (for instance, this can be a consequence of the agents being close enough to get in touch). We will abstractly denote by C(ai, aj, t) the communication property, in other words, the fact that agents ai and aj can communicate at time t (note that this relation is assumed to be symmetric, but of course not transitive). We are now in a position to define two essential properties of our system.

Definition 4 (Temporal Path). There exists a temporal communication path at horizon tf (noted Ltf(ai, aj)) between ai and aj iff there exists a sequence of time points t1, t2, ..., tn+1 from tf and a sequence of agents k1, k2, ..., kn s.t. (i) C(ai, ak1, t1), (ii) C(akn, aj, tn+1), (iii) ∀i ∈ [1, n − 1], C(aki, aki+1, ti+1).

Intuitively, what this property says is that it is possible to find a temporal path in the future that would allow to link agent ai and aj via a sequence of intermediary agents. Note that the time points are not necessarily successive, and that the sequence of agents may involve the same agents several times.

Definition 5 (Temporal Connexity). A system is temporally connex iff ∀t, ∀(i, j) ∈ [1, n]², Lt(ai, aj).

In short, a temporally connex system guarantees that any agent will be able to communicate with any other agent, no matter how long it might take to do so, at any time. To put it another way, it is never the case that an agent will be isolated forever from another agent of the system. We will next discuss the details of how communication concretely takes place in our system. Remember that in this paper, we only consider the case of bilateral exchanges (an agent can only speak to a single other agent), and that we also assume that any agent can only engage in a single exchange in a given round.

3. PROTOCOLS AND STRATEGIES
In this section, we discuss the requirements of the interaction protocols that govern the exchange of messages between agents, and provide some example instantiations of such protocols. To clarify the presentation, we distinguish two levels: the local level, which is concerned with the regulation of bilateral exchanges; and the global level, which essentially regulates the way agents can actually engage in a conversation. At each level, we separate what is specified by the protocol from what is left to agents' strategies.

Local Protocol and Strategies
We start by inspecting local protocols and strategies that will regulate the communication between the agents of the system. As we limit ourselves to bilateral communication, these protocols will simply involve two agents. Such a protocol will have to meet one basic requirement to be satisfactory:
• consistency (CONS): a local protocol has to guarantee the mutual consistency of agents upon termination (which implies termination, of course).

Figure 1: A Hypotheses Exchange Protocol [1]

One example of such a protocol is the protocol described in [1] that is pictured in Fig. 1.
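Stepping back briefly to Definitions 4 and 5 above: while temporal connexity quantifies over an unbounded future and cannot be checked mechanically in general, the temporal-path notion is easy to test on a finite window of communication graphs. The sketch below propagates, step by step, the set of agents whose information could have reached each agent; it is only a finite-horizon approximation, and its data structures are illustrative.

```python
def temporal_paths_exist(comm_graphs, agents):
    """comm_graphs: one iterable of undirected edges (x, y) per time step,
    in increasing time order; with bilateral exchanges, each agent appears
    in at most one edge per step. Returns True iff, within the window,
    every ordered pair of agents is linked by a temporal path in the
    sense of Definition 4."""
    reach = {a: {a} for a in agents}     # agents whose information reached a
    for edges in comm_graphs:
        for (x, y) in edges:
            merged = reach[x] | reach[y]  # a contact merges both directions
            reach[x], reach[y] = merged, set(merged)
    return all(reach[a] == set(agents) for a in agents)
```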
To further illustrate how such a protocol can be used by agents, we give some details on a possible strategy: upon receiving a hypothesis h1 (propose(h1) or counterpropose(h1)) from a1, agent a2 is in state 2 and has the following possible replies: counterexample (if the agent knows an example contradicting the hypothesis, or not explained by this hypothesis), challenge (if the agent lacks evidence to accept this hypothesis), counterpropose (if the agent agrees with the hypothesis but prefers another one), or accept (if it is indeed as good as its favourite hypothesis). This strategy guarantees, among other properties, the eventual mutual logical consistency of the involved agents [1].

Global Protocol
The global protocol regulates the way bilateral exchanges will be initiated between agents. At each turn, agents will concurrently send one weighted request to communicate to other agents. This weight is a value measuring the agent's willingness to converse with the targeted agent (in practice, this can be based on different heuristics, but we shall make some assumptions on agents' strategies, see below). Sending such a request is a kind of conditional commitment for the agent. An agent sending a weighted request commits to engage in conversation with the target if it does not itself receive and accept another request. Once all requests have been received, each agent replies with either an accept or a reject. By answering with an accept, an agent makes a full commitment to engage in conversation with the sender. Therefore, it can only send one accept in a given round, as an agent can only participate in one conversation per time step. When all responses have been received, each agent receiving an accept can either initiate a conversation using the local protocol or send a cancel if it has accepted another request. At the end of all the bilateral exchanges, the agents engaged in conversation are discarded from the protocol. Then each of the remaining agents resends a request and the process iterates until no more requests are sent.

Global Strategy
We now define four requirements for the strategies used by agents, depending on their role in the protocol: two are concerned with the requester role (how to decide whom the agent wishes to communicate with?), the other two with the responder role (how to decide which communication request to accept or not?).
• Willingness to solve inconsistencies (SOLVE): agents want to communicate with any other agents unless they know they are mutually consistent.
• Focus on solving inconsistencies (FOCUS): agents do not request communication with an agent with whom they know they are mutually consistent.
• Willingness to communicate (COMM): agents cannot refuse a weighted communication request, unless they have just received or sent a request with a greater weight.
• Commitment to communication request (REQU): agents cannot accept a weighted communication request if they have themselves sent a communication request with a greater weight. Therefore, they will not cancel their request unless they have received a communication request with greater weight.

Now the protocol structure, together with the properties COMM+REQU, ensures that a request can only be rejected if its target agent engages in communication with another agent. Suppose indeed that agent ai wants to communicate with aj by sending a request with weight w.
COMM guarantees that an agent receiving a weighted request will either accept this communication, accept a communication with a greater weight, or wait for the answer to a request with a greater weight. This ensures that the request with maximal weight will be accepted and not cancelled (as REQU ensures that an agent sending a request can only cancel it if it accepts another request with greater weight). Therefore at least two agents will engage in conversation per round of the global protocol. As the protocol ensures that ai can resend its request while aj is not engaged in a conversation, there will be a turn in which aj must engage in a conversation, either with ai or another agent.

These requirements concern request sending and acceptance, but agents also need some strategy of weight attribution. We describe below an altruist strategy, used in our experiments. Being cooperative, an agent may want to know more of the communication wishes of other agents in order to improve the overall allocation of exchanges to agents. A context request step is then added to the global protocol. Before sending their chosen weighted request, agents attribute a weight to all agents they are prepared to communicate with, according to some internal factors. In the simplest case, this weight will be 1 for all agents with whom the agent is not sure of being mutually consistent (ensuring SOLVE), other agents not being considered for communication (ensuring FOCUS). The agent then sends a context request to all agents with whom communication is considered. This request also provides information about the sender (list of considered communications along with their weights). After reception of all the context requests, agents will either reply with a deny, iff they are already engaged in a conversation (in which case the requesting agent will not consider communication with them anymore in this turn), or an inform giving the requester information about the requests it has sent and received. When all replies have been received, each agent can calculate the weight of all requests concerning it. It does so by subtracting from the weight of its request the weight of all requests concerning either it or its target; that is, the final weight of the request from ai to aj is

Wi,j = wi,j + wj,i − ( Σ_{k∈R(i)−{j}} wi,k + Σ_{k∈S(i)−{j}} wk,i + Σ_{k∈R(j)−{i}} wj,k + Σ_{k∈S(j)−{i}} wk,j ),

where wi,j is the weight of the request of ai to aj, R(i) is the set of indices of agents having received a request from ai, and S(i) is the set of indices of agents having sent a request to ai. It then finally sends a weighted request to the agent that maximises this weight (or waits for a request) as described in the global protocol.

4. (CONDITIONAL) CONVERGENCE TO GLOBAL CONSISTENCY
In this section we will show that the requirements regarding protocols and strategies just discussed are sufficient to ensure that the system will eventually converge towards global consistency, under some conditions. We first show that, if two agents are not mutually consistent at some time, then there will necessarily be a time in the future such that an agent will learn a new observation, be it because it is new for the system, or by learning it from another agent.

Lemma 1. Let S be a system populated by n agents a1, a2, ..., an, temporally connex, and involving bounded perceptions for these agents. Let n1 be the sum of the cardinalities of the intersections of pairwise observation sets
(n1 = Σ_{(i,j)∈[1,n]²} |Oi ∩ Oj|). Let n2 be the cardinality of the union of all agents' observation sets (n2 = |∪_{i=1}^{N} Oi|). If ¬MCons(ai, aj) at time t0, there is necessarily a time t′ > t0 s.t. either n1 or n2 will increase.

Proof. Suppose that there exist a time t0 and indices (i, j) s.t. ¬MCons(ai, aj). We will use mt0 = Σ_{(k,l)∈[1,n]²} εComm(ak, al, t0), where εComm(ak, al, t0) = 1 if ak and al have communicated at least once since t0, and 0 otherwise. Temporal connexity guarantees that there exist t1, ..., tm+1 and k1, ..., km s.t. C(ai, ak1, t1), C(akm, aj, tm+1), and ∀p ∈ [1, m − 1], C(akp, akp+1, tp+1). Clearly, if MCons(ai, ak1), MCons(akm, aj) and ∀p, MCons(akp, akp+1), we have MCons(ai, aj), which contradicts our hypothesis (MCons being transitive, MCons(ai, ak1) ∧ MCons(ak1, ak2) implies that MCons(ai, ak2), and so on till MCons(ai, akm) ∧ MCons(akm, aj), which implies MCons(ai, aj)). At least two agents are then necessarily inconsistent (¬MCons(ai, ak1), or ¬MCons(akm, aj), or ∃p0 s.t. ¬MCons(akp0, akp0+1)). Let ak and al be these two neighbours at a time t′ > t0.¹ The SOLVE property ensures that either ak or al will send a communication request to the other agent at time t′. As shown before, this in turn ensures that at least one of these agents will be involved in a communication. Then there are two possibilities:

(case i) ak and al communicate at time t′. In this case, we know that ¬MCons(ak, al). This and the CONS property ensure that at least one of the agents must change its hypothesis, which in turn, since agents are autonomous, implies at least one exchange of observations. But then |Ok ∩ Ol| is bound to increase: n1(t′) > n1(t0).

(case ii) ak communicates with ap at time t′. We then have again two possibilities:

(case iia) ak and ap did not communicate since t0. But then εComm(ak, ap, t0) had value 0 and takes value 1. Hence mt0 increases.

(case iib) ak and ap did communicate at some time t′0 > t0. The CONS property of the protocol ensures MCons(ak, ap) at that time. Now the fact that they communicate and FOCUS implies that at least one of them did change its hypothesis in the meantime. The fact that agents are autonomous implies in turn that a new observation (perceived or received from another agent) necessarily provoked this change. The latter case would ensure the existence of a time t″ > t0 and an agent aq s.t. either |Op ∩ Oq| or |Ok ∩ Oq| increases by 1 at that time (implying n1(t″) > n1(t0)). The former case means that the agent gets a new perception o at time t″. If that observation was unknown in the system before, then n2(t″) > n2(t0). If some agent aq already knew this observation before, then either Op ∩ Oq or Ok ∩ Oq increases by 1 at time t″ (which implies that n1(t″) > n1(t0)).

Hence, ¬MCons(ai, aj) at time t0 guarantees that, either:
− ∃t′ > t0 s.t. n1(t′) > n1(t0); or
− ∃t′ > t0 s.t. n2(t′) > n2(t0); or
− ∃t′ > t0 s.t. mt0 increases by 1 at time t′.

¹ Strictly speaking, the transitivity of MCons only ensures that ak and al are inconsistent at a time t ≥ t0 that can be different from the time t′ at which they can communicate. But if they become consistent between t and t′ (or inconsistent between t′ and t), it means that at least one of them has changed its hypothesis between t and t′, that is, after t0. We can then apply the reasoning of case iib.
By iterating the reasoning with t′ (but keeping t0 as the time reference for mt0), we can eliminate the third case (mt0 is an integer and bounded by n², which means that after a maximum of n² iterations we will necessarily be in one of the two other cases). As a result, we have proven that if ¬MCons(ai, aj) at time t0, there is necessarily a time t′ s.t. either n1 or n2 will increase.

Theorem 1 (Global consistency). Let S be a system populated by n agents a1, a2, ..., an, temporally connex, and involving bounded perceptions for these agents. Let Cons(ai, aj) be a transitive consistency property. Then any protocol and strategies satisfying properties CONS, SOLVE, FOCUS, COMM and REQU guarantee that the system will converge towards global consistency.

Proof. For the sake of contradiction, let us assume ∃I, J ∈ [1, n] s.t. ∀t, ∃t0 > t s.t. ¬Cons(aI, aJ, t0). Using the lemma, this implies that ∃t′ > t0 s.t. either n1(t′) > n1(t0) or n2(t′) > n2(t0). But we can apply the same reasoning taking t = t′, which would give us t1 > t′ > t0 s.t. ¬Cons(aI, aJ, t1), which gives us t″ > t1 s.t. either n1(t″) > n1(t1) or n2(t″) > n2(t1). By successive iterations we can then construct a sequence t0, t1, ..., tn, which can be divided into two subsequences t′0, t′1, ..., t′n and t″0, t″1, ..., t″n s.t. n1(t′0) < n1(t′1) < ... < n1(t′n) and n2(t″0) < n2(t″1) < ... < n2(t″n). One of these subsequences has to be infinite. However, n1(t′i) and n2(t″i) are strictly growing, integer, and bounded, which implies that both are finite. Contradiction.

What the previous result essentially shows is that, in a system where no agent will be isolated from the rest of the agents forever, only very mild assumptions on the protocols and strategies used by agents suffice to guarantee convergence towards system consistency in a finite amount of time (although it might take very long). Unfortunately, in many critical situations, it will not be possible to assume this temporal connexity. As distributed approaches like the one advocated in this paper are precisely often presented as a good way to tackle problems of reliability or problems of dependence on a center, which are of utmost importance in these critical applications, it is certainly interesting to further explore how such a system would behave when we relax this assumption.

5. EXPERIMENTAL STUDY
This experiment involves agents trying to escape from a burning building. The environment is described as a spatial grid with a set of walls and (thankfully) some exits. Time and space are considered discrete. Time is divided into rounds. Agents are localised by their position on the spatial grid. These agents can move and communicate with other agents. In a round, an agent can move one cell in any of the four cardinal directions, provided it is not blocked by a wall. In this application, agents communicate with any other agent (but, recall, a single one) given that this agent is in view, and that they have not yet exchanged their current favoured hypothesis. Suddenly, a fire erupts in these premises. From this moment, the fire propagates. Each round, for each cell where there is fire, the fire propagates in the four directions. However, the fire cannot propagate through a wall. If the fire propagates into a cell where an agent is positioned, that agent burns and is considered dead. It can of course no longer move nor communicate. If an agent gets to an exit, it is considered saved, and can no longer be burned.
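The environment dynamics just described reduce to a few lines of code. The following sketch advances the fire by one round on the discrete grid; representing cells as (x, y) pairs and walls as blocked cell pairs is an illustrative choice, not taken from the paper.

```python
def propagate_fire(on_fire, walls, width, height):
    """One round of fire propagation (Section 5): fire spreads to the four
    neighbouring cells, unless a wall blocks the way or the grid ends.
    `walls` is a set of frozenset({cell, cell}) pairs that fire (and agent
    movement) cannot cross."""
    new_fire = set(on_fire)
    for (x, y) in on_fire:
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            inside = 0 <= nxt[0] < width and 0 <= nxt[1] < height
            if inside and frozenset({(x, y), nxt}) not in walls:
                new_fire.add(nxt)
    return new_fire
```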
Agents know the environment and the rules governing the dynamics of this environment, that is, they know the map as well as the rules of fire propagation previously described. They also locally perceive this environment, but cannot see further than 3 cells away, in any direction. Walls also block the line of view, preventing agents from seeing behind them. Within their sight, they can see other agents and whether or not the cells they see are on fire. All these perceptions are memorised. We now show how this instantiates the abstract framework presented in the paper.

• O = {Fire(x, y, t), NoFire(x, y, t), Agent(ai, x, y, t)}. Observations can then be positive (o ∈ P(O) iff ∃h ∈ H s.t. h |= o) or negative (o ∈ N(O) iff ∃h ∈ H s.t. h |= ¬o).
• H = {FireOrigin(x1, y1, t1) ∧ ... ∧ FireOrigin(xl, yl, tl)}. Hypotheses are conjunctions of FireOrigins.
• Cons(h, O) is a consistency relation satisfying:
- coherence: ∀o ∈ N(O), h |= ¬o.
- completeness: ∀o ∈ P(O), h |= o.
- minimality: for all h′ ∈ H, if h′ is coherent and complete for O, then h is preferred to h′ according to the preference relation (h ≤p h′).²

² Selects first the minimal number of origins, then the most recent (least preemptive strategy [6]), then uses some arbitrary fixed ranking to discriminate ex aequo. The resulting relation is a total order, hence minimality implies that there will be a single h s.t. Cons(O, h) for a given O. This in turn means that MCons(ai, aj) iff Cons(ai), Cons(aj), and hi = hj. This relation is then transitive and symmetric.
The topological index (TI) is the ratio of the number of cells that can be perceived by agents summed up from all possible positions, divided by the number of cells that would be perceived from the same positions but without any walls. (The closer to 1, the more open the environment). We shall also use two additional, more classical [10], measures: the characteristic path length3 (CPL) and the clustering coefficient4 (CC). • Number of agents- The propagation of information also depends on the initial number of agents involved during an experimentation. For instance, the more agents, the more potential communications there is. This means that there will be more potential for propagation, but also that the bilateral exchange restriction will be more crucial. 3 The CPL is the median of the means of the shortest path lengths connecting each node to all other nodes. 4 characterising the isolation degree of a region of an environment in terms of acessibility (number of roads still usable to reach this region). Map T.I. (%) C.P.L. C.C. 69-1 69,23 4,5 0,69 69-2 68,88 4,38 0,65 69-3 69,80 4,25 0,67 53-1 53,19 5,6 0,59 53-2 53,53 6,38 0,54 53-3 53,92 6,08 0,61 38-1 38,56 8,19 0,50 38-2 38,56 7,3 0,50 38-3 38,23 8,13 0,50 Table 1: Topological Characteristics of the Maps • Initial positions of the agents- Initial positions of the agents have a significant influence on the overall behavior of an instance of our system: being close from an exit will (in general) ease the escape. 5.3 Experimental environments We choose to realize experiments on three very different topological indexes (69% for open environments, 53% for mixed environments, and 38% for labyrinth-like environments). Figure 2: Two maps (left: TI=69%, right TI=38%) We designed three different maps for each index (Fig. 2 shows two of them), containing the same maximum number of agents (36 agents max.) with a maximum density of one agent per cell, the same number of exits and a similar fire origin (e.g. starting time and position). The three differents maps of a given index are designed as follows. The first map is a model of an existing building floor. The second map has the same enclosure, exits and fire origin as the first one, but the number and location of walls are different (wall locations are designed by an heuristic which randomly creates walls on the spatial grid such that no fully closed rooms are created and that no exit is closed). The third map is characterised by geometrical enclosure in wich walls location is also designed with the aforementioned heuristic. Table 1 summarizes the different topological measures characterizing these different maps. It is worth pointing out that the values confirm the relevance of TI (maps with a high TI have a low CPL and a high CC. However the CPL and CC allows to further refine the difference between the maps, e.g. between 53-1 and 53-2). 5.4 Experimental Results For each triple of maps defined as above we conduct the same experiments. In each experiment, the society differs in terms of its initial proportion of involved agents, from 1% to 100%. This initial proportion represents the percentage of involved agents with regards to the possible maximum number of agents. For each map and each initial proportion, The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1003 we select randomly 100 different initial agents'' locations. For each of those different locations we execute the system one time for each different interaction protocol. 
Effectiveness of Communication and Argumentation
The first experiment aims at testing how effective hypotheses exchange (HE) is, and in particular how topological aspects affect this effectiveness. To do so, we computed the ratio of improvement offered by that protocol over a situation where agents simply cannot communicate (no comm). To get further insight into the extent to which hypotheses exchange is really crucial, we also tested a much less elaborate protocol consisting of mere observation exchanges (OE). More precisely, this protocol requires that each agent store any unexpected observation it perceives, and agents simply exchange their respective lists of observations when they discuss. In this case the local protocol is different (note in particular that it does not guarantee mutual consistency), but the global protocol remains the same (with the sole exception that agents' motivation to communicate is to synchronise their lists of observations, not their hypotheses). If this protocol is at best as effective as HE, it has the advantage of being more efficient: this is obvious with respect to the number of messages, which is limited to 2, and less straightforward as far as the size of messages is concerned, though the rough observation that an exchange of observations can be viewed as a flat version of the challenge helps to see it. The results of these experiments are reported in Fig. 3.
Figure 3: Comparative effectiveness ratio gain of protocols as the proportion of agents increases
The first observation to be made is that communication improves the effectiveness of the process, and this ratio increases as the number of agents in the system grows. The second lesson is that closed environments make communication relatively more effective than no communication: maps exhibiting a T.I. of 38% are constantly above the two others, and 53% maps are still slightly but significantly better than 69% ones. However, these curves also suggest, perhaps surprisingly, that HE outperforms OE precisely in those situations where the ratio gain is less important (the only noticeable difference occurs for rather open maps, where T.I. is 69%). This may be explained as follows: when a map is open, agents have many potential explanation candidates, and argumentation becomes useful to discriminate between them; when a map is labyrinth-like, there are fewer possible explanations for an unexpected event.
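The measures behind Fig. 3 are simple enough to state in code. The exact gain formula is not spelled out in the paper, so the relative-improvement definition below is our assumption, as are the function names.

    def effectiveness(survivors, initial_agents):
        # Effectiveness of a protocol: proportion of surviving agents.
        return survivors / initial_agents

    def ratio_gain(eff_protocol, eff_baseline):
        # Improvement ratio of one protocol over a baseline, e.g. HE over
        # no-communication (Fig. 3) or LB over HE (Fig. 4).  Assumed
        # definition: relative improvement of the effectiveness values.
        return (eff_protocol - eff_baseline) / eff_baseline

    # Example: 27 of 36 agents survive with HE versus 20 of 36 without
    # communication, a ratio gain of 0.35.
    print(ratio_gain(effectiveness(27, 36), effectiveness(20, 36)))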
Importance of the Global Protocol
The second set of experiments evaluates the importance of the design of the global protocol. We tested our protocol against a local broadcast (LB) protocol, in which all the neighbouring agents perceived by an agent are involved in a communication with that agent in a given round; that is, we lift the constraint of a single communication per agent. This gives a rough upper bound on the possible ratio gain in the system (for a given local protocol). Again, we evaluated the ratio gain induced by LB over our classical HE, for the three classes of maps. The results are reported in Fig. 4.
Figure 4: Ratio gain of local broadcast over hypotheses exchange
Note to begin with that the ratio gain is 0 when the proportion of agents is 5%, which is easily explained by the fact that this corresponds to situations involving only two agents. We first observe that all classes of maps witness a ratio gain that increases with the proportion of agents: the gain reaches 10 to 20%, depending on the class of maps considered. Compared with the improvement reported in the previous experiment, it is of the same magnitude. This illustrates that the design of the global protocol cannot be ignored, especially when the proportion of agents is high. Note, however, that the effectiveness ratio gain curves have very different shapes in the two cases: the gain induced by the accuracy of the local protocol increases very quickly with the proportion of agents, while the curve for the global protocol is much smoother. Looking more carefully at the results, the curve corresponding to a TI of 53% is above that corresponding to 38%. This is because the more open a map, the more opportunities there are to communicate with more than one agent (and hence to benefit from broadcast). However, the curve for 69% is below that for 53%. This is explained as follows: in the case of 69%, the potential gain in terms of surviving agents is much lower, because our protocols already give rather efficient outcomes anyway (quickly reaching 90%, see Fig. 3). A simple rule of thumb could be that when the number of agents is small, special attention should be put on the local protocol, whereas when that number is large, one should carefully design the global one (unless the map is so open that the protocol is already almost optimally efficient).
Efficiency of the Protocols
The final experiment reported here concerns the efficiency of the protocols. We analyse the mean size of the totality of the messages exchanged by agents (mean size of exchanges, for short) under the following protocols: HE, OE, and two variants. The first variant is an intermediary restricted hypotheses exchange protocol (RHE). RHE does not involve any challenge or counter-propose, which means that agents cannot switch their roles during the protocol (this differs from HE in that respect). In short, RHE allows an agent to exhaust its partner's criticism, and eventually this partner will come to adopt the agent's hypothesis. Note that this means the autonomy of the agent is not preserved here (an agent will essentially accept any hypothesis it cannot undermine), the hope being that the gain in efficiency is significant enough to compensate for a loss in effectiveness. The second variant is a complete observation exchange protocol (COE). COE follows the same principles as OE, but in addition includes all critical negative examples (nofire) in the exchange (thus providing all the examples used as arguments by the hypotheses exchange protocol), hence improving effectiveness.
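The efficiency measure itself can be sketched as follows. The per-message size accounting (e.g. one unit per observation or hypothesis atom) is our assumption; the paper only states that both the number and the size of messages are counted.

    def mean_exchange_size(exchanges):
        # Mean total size of the messages exchanged per bilateral exchange.
        # `exchanges` holds, for each bilateral exchange, the list of its
        # message sizes, so the measure reflects both how many messages
        # were sent and how large each one was.
        totals = [sum(message_sizes) for message_sizes in exchanges]
        return sum(totals) / len(totals)

    # Example: one long argumentation (4 messages) and one quick accept
    # (2 messages), with sizes in abstract units: (7 + 2) / 2 = 4.5.
    print(mean_exchange_size([[1, 3, 2, 1], [1, 1]]))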
Results for map 69-1 are shown in Fig. 5.
Figure 5: Mean size of exchanges
First, we can observe that the ordering of the protocols, from least efficient to most efficient, is COE, HE, RHE, and then OE. The fact that HE is more efficient than COE shows that the argumentation process gains efficiency by selecting when a negative example needs to be provided; such examples have less impact than positive ones in our specific testbed. However, by communicating hypotheses before eventually giving observations to support them (HE), instead of directly giving the most crucial observations (OE), the argumentation process doubles the size of the data exchanged. This is the cost of ensuring consistency at the end of the exchange (a property that OE does not guarantee). Also significant is the fact that the mean size of exchanges is slightly higher when the number of agents is small. This is explained by the fact that in these cases only very few agents possess relevant information, and they need to communicate a lot in order to come up with a common view of the situation. When the number of agents increases, this knowledge is distributed over more agents, which need shorter discussions to reach mutual consistency. As a consequence, the relative gain in efficiency of using RHE appears to be better when the number of agents is small: when it is high, agents will hardly argue anyway. Finally, it is worth noticing that the standard deviation in these experiments is rather high, which means that the conversations do not converge to any stereotypic pattern.
6. CONCLUSION
This paper has investigated the properties of a multiagent system where each (distributed) agent locally perceives its environment and tries to reach consistency with other agents despite severe communication restrictions. In particular, we have exhibited conditions allowing convergence, and experimentally investigated a typical situation where those conditions cannot hold. There are many possible extensions to this work, the first being to further investigate the properties of the different global protocols belonging to the class we identified, and their influence on the outcome. There are in particular many heuristics, highly dependent on the context of the study, that could intuitively yield interesting results (in our study, selecting the recipient on the basis of what can be inferred from its observed actions could be such a heuristic). One obvious candidate for longer-term work is the relaxation of the assumption of perfect sensing.
7. REFERENCES
[1] G. Bourgne, N. Maudet, and S. Pinson. When agents communicate hypotheses in critical situations. In Proceedings of DALT-2006, May 2006.
[2] P. Harvey, C. F. Chang, and A. Ghose. Support-based distributed search: a new approach for multiagent constraint processing. In Proceedings of AAMAS06, 2006.
[3] H. Jung and M. Tambe. Argumentation as distributed constraint satisfaction: Applications and results. In Proceedings of AGENTS01, 2001.
[4] N. C. Karunatillake and N. R. Jennings. Is it worth arguing? In Proceedings of ArgMAS 2004, 2004.
[5] S. Ontañón and E. Plaza. Arguments and counterexamples in case-based joint deliberation. In Proceedings of ArgMAS-2006, May 2006.
[6] D. Poole. Explanation and prediction: An architecture for default and abductive reasoning. Computational Intelligence, 5(2):97-110, 1989.
[7] I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons, and L. Sonenberg. Argumentation-based negotiation. The Knowledge Engineering Review, 18(4):343-375, 2003.
[8] N. Roos, A. ten Teije, and C. Witteveen. A protocol for multi-agent diagnosis with spatially distributed knowledge. In Proceedings of AAMAS03, 2003.
[9] N. Roos, A. ten Teije, and C. Witteveen. Reaching diagnostic agreement in multiagent diagnosis. In Proceedings of AAMAS04, 2004.
[10] T. Takahashi, Y. Kaneda, and N. Ito. Preliminary study: using RoboCupRescue simulations for disaster prevention. In Proceedings of SRMED2004, 2004.
Hypotheses Refinement under Topological Communication Constraints * ABSTRACT We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment. Upon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations). However, we further assume that communication opportunities are severely constrained and change dynamically. In this paper, we mostly investigate the convergence of such systems towards global consistency. We first show that (for a wide class of protocols that we shall define), the communication constraints induced by the topology will not prevent the convergence of the system, at the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and arguments exchange. As this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange. We study a critical situation involving a number of agents aiming at escaping from a burning building. The results reported here provide some insights regarding the design of optimal protocol for hypotheses refinement in this context. 1. INTRODUCTION We consider a multiagent system where each (distributed) agent locally perceives its environment, and we assume that some unexpected event occurs in that system. If each agent computes only locally its favoured hypothesis, it is only natural to assume that agents will seek to coordinate and refine their hypotheses by confronting their observations with other agents. If, in addition, the communication opportunities are severely constrained (for instance, agents can only communicate when they are close enough to some other agent), and dynamically changing (for instance, agents may change their locations), it becomes crucial to carefully design protocols that will allow agents to converge to some desired state of global consistency. In this paper we exhibit some sufficient conditions on the system dynamics and on the protocol/strategy structures that allow to guarantee that property, and we experimentally study some contexts where (some of) these assumptions are relaxed. While problems of diagnosis are among the venerable classics in the AI tradition, their multiagent counterparts have much more recently attracted some attention. Roos and colleagues [8, 9] in particular study a situation where a number of distributed entities try to come up with a satisfying global diagnosis of the whole system. They show in particular that the number of messages required to establish this global diagnosis is bound to be prohibitive, unless the communication is enhanced with some suitable protocol. However, they do not put any restrictions on agents' communication options, and do not assume either that the system is dynamic. The benefits of enhancing communication with supporting information to make convergence to a desired global state of a system more efficient has often been put forward in the literature. This is for instance one of the main idea underlying the argumentation-based negotiation approach [7], where the desired state is a compromise between agents with conflicting preferences. 
Many of these works however make the assumption that this approach is beneficial to start with, and study the technical facets of the problem (or instead emphasize other advantages of using argumentation). Notable exceptions are the works of [3, 4, 2, 5], which studied in contexts different from ours the efficiency of argumentation. The rest of the paper is as follows. Section 2 specifies the basic elements of our model, and Section 3 goes on to presenting the different protocols and strategies used by the agents to exchange hypotheses and observations. We put special attention at clearly emphasizing the conditions on the system dynamics and protocols/strategies that will be exploited in the rest of the paper. Section 4 details one of the main results of the paper, namely the fact that under the aforementioned conditions, the constraints that we put on the topology will not prevent the convergence of the system towards global consistency, at the condition that no agent ever gets completely "lost" forever in the system, and that unlimited time is allowed for computation and argument exchange. While the conditions on protocols and strategies are fairly mild, it is also clear that these system requirements look much more problematic, even frankly unrealistic in critical situations where distributed approaches are precisely advocated. To get a clearer picture of the situation induced when time is a critical factor, we have set up an experimental framework that we introduce and discuss in Section 5. The critical situation involves a number of agents aiming at escaping from a burning building. The results reported here show that the effectiveness of argument exchange crucially depends upon the nature of the building, and provide some insights regarding the design of optimal protocol for hypotheses refinement in this context. 2. BASIC NOTIONS DEFINITION 1 (SEQUENCE OF TIME POINTS). A se Agent Agent Cycle The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 999 Communication 3. PROTOCOLS AND STRATEGIES Local Protocol and Strategies Global Protocol Global Strategy 1000 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4. (CONDITIONAL) CONVERGENCE TO GLOBAL CONSISTENCY The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1001 5. EXPERIMENTAL STUDY 1002 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 5.1 Experimental Evaluation Effectiveness of a protocol 5.2 Experimental Settings 5.3 Experimental environments 5.4 Experimental Results The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1003 Effectiveness of Communication and Argumentation 1004 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 6. CONCLUSION This paper has investigated the properties of a multiagent system where each (distributed) agent locally perceives its environment, and tries to reach consistency with other agents despite severe communication restrictions. In particular we have exhibited conditions allowing convergence, and experimentally investigated a typical situation where those conditions cannot hold. There are many possible extensions to this work, the first being to further investigate the properties of different global protocols belonging to the class we identified, and their influence on the outcome. 
There are in particular many heuristics, highly dependent on the context of the study, that could intuitively yield interesting results (in our study, selecting the recipient on the basis of what can be inferred from his observed actions could be such a heuristic). One obvious candidate among longer-term issues concerns the relaxation of the assumption of perfect sensing.
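To make the convergence condition concrete, the following minimal Python sketch (not from the paper; the contact schedule, the hypotheses and their support sets are illustrative assumptions) simulates agents that, on contact, exchange hypotheses together with their supporting observations and keep the better-supported one. Consistency spreads as long as no agent stays isolated forever.

```python
import random

random.seed(0)
N_AGENTS = 8

# hypothesis -> supporting observations (hypothetical data)
support = {"h0": {"o1"}, "h1": {"o1", "o2"}, "h2": {"o1", "o2", "o3"}}
favoured = [random.choice(sorted(support)) for _ in range(N_AGENTS)]

def contacts(t):
    """Dynamic topology: one random pair may communicate per time point,
    so no agent stays isolated forever."""
    return [tuple(random.sample(range(N_AGENTS), 2))]

t = 0
while len(set(favoured)) > 1 and t < 10_000:
    for a, b in contacts(t):
        # on contact, exchange hypotheses and supporting observations;
        # both agents keep the better-supported hypothesis
        best = max(favoured[a], favoured[b], key=lambda h: len(support[h]))
        favoured[a] = favoured[b] = best
    t += 1

print(f"consensus on {favoured[0]!r} after {t} time points")
```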
A Multilateral Multi-issue Negotiation Protocol
Miniar Hemaissia, THALES Research & Technology France, RD 128, F-91767 Palaiseau Cedex, France, miniar.hemaissia@lip6.fr
Amal El Fallah Seghrouchni, LIP6, University of Paris 6, 8 rue du Capitaine Scott, F-75015 Paris, France, amal.elfallah@lip6.fr
Christophe Labreuche and Juliette Mattioli, THALES Research & Technology France, RD 128, F-91767 Palaiseau Cedex, France

ABSTRACT
In this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context. We consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment. This information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents. In addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent.

Categories and Subject Descriptors
I.2.11 [Distributed Artificial Intelligence]: Intelligent agents, Multiagent systems

General Terms
Theory, Design, Experimentation

1. INTRODUCTION
Multi-issue negotiation protocols represent an important field of study, since negotiation problems in the real world are often complex ones involving multiple issues. To date, most previous work in this area [2, 3, 19, 13] has dealt almost exclusively with simple negotiations involving independent issues. However, real-world negotiation problems involve complex dependencies between multiple issues. When one wants to buy a car, for example, the value of a given car is highly dependent on its price, consumption, comfort and so on. The addition of such interdependencies greatly complicates the agents' utility functions, and classical utility functions, such as the weighted sum, are not sufficient to model this kind of preferences. The authors of [10, 9, 17, 14, 20] consider inter-dependencies between issues, most often defined over boolean values (except for [9]), whereas we can deal with continuous and discrete dependent issues thanks to the modelling power of the Choquet integral. In [17], the authors deal with bilateral negotiation, while we are interested in a multilateral negotiation setting. Klein et al. [10] present an approach similar to ours, also using a mediator and information about the strength of the approval or rejection that an agent expresses during the negotiation. In our protocol, we use more precise information to improve the proposals, thanks to the multi-criteria methodology and tools used to model the preferences of our agents. Lin, in [14, 20], also presents a mediation service, but one using an evolutionary algorithm to reach optimal solutions; as explained in [4], players in evolutionary models need to repeatedly interact with each other until the stable state is reached. As the population size increases, the time it takes for the population to stabilize also increases, resulting in excessive computation, communication, and time overheads that can become prohibitive; for one-to-many and many-to-many negotiations, the overheads become higher as the number of players increases. In [9], the authors consider a non-linear utility function by using constraints on the domain of the issues and a mediation service to find a combination of bids maximizing the social welfare.
Our preference model, a nonlinear utility function too, is more complex than the one in [9], since the Choquet integral takes into account the interactions and the importance of each decision criterion/issue, not only the dependencies between the values of the issues, to determine the utility. We also use an iterative protocol enabling us to find a solution even when no bid combination is possible. In this paper, we propose a negotiation protocol suited to multiple agents with complex preferences, taking into account, at the same time, multiple interdependent issues and recommendations made by the agents to improve a proposal. Moreover, the preferences of our agents are modelled using a multi-criteria methodology and tools enabling us to take into account information about the improvements that can be made to a proposal, in order to help in accelerating the search for a consensus between the agents. Therefore, we propose a negotiation protocol consisting of solving our decision problem using a MAS, with a multi-criteria decision aiding modelling at the agent level and a cooperation-based multilateral multi-issue negotiation protocol. This protocol is studied under a non-cooperative approach, and it is shown that it has subgame perfect equilibria, provided that agents behave rationally in the sense of von Neumann and Morgenstern. The approach proposed in this paper was first introduced and presented in [8]. In this paper, we present our first experiments, with some noteworthy results, and a more complex multi-agent system with representatives, enabling us to have a more robust system. In Section 2, we present our application, a crisis management problem. Section 3 deals with the general aspects of the proposed approach. The preference modelling is described in sect. 4, whereas the motivations of our protocol are considered in sect. 5 and the agent/multiagent modelling in sect. 6. Section 7 presents the formal modelling and properties of our protocol, before presenting our first experiments in sect. 8. Finally, in Section 9, we conclude and present future work.

2. CASE STUDY
This protocol is applied to a crisis management problem. Crisis management is a relatively new field of management and is composed of three types of activities: crisis prevention, operational preparedness and management of declared crisis. Crisis prevention aims to bring the risk of crisis to an acceptable level and, when possible, to avoid the crisis actually happening. Operational preparedness includes strategic advance planning, training and simulation to ensure availability, rapid mobilisation and deployment of resources to deal with possible emergencies. The management of a declared crisis is the response to the crisis - including the evacuation, search and rescue - and the recovery from it, by minimising the effects of the crisis, limiting the impact on the community and environment and, in the longer term, bringing the community's systems back to normal. In this paper, we focus on the response part of the management of declared crisis activity, and particularly on the evacuation of injured people in disaster situations. When a crisis is declared, the plans defined during the operational preparedness activity are executed. For disasters, master plans are executed. These plans are elaborated by the authorities with the collaboration of civil protection agencies, police, health services, non-governmental organizations, etc.
When a victim is found, several actions follow. First, a rescue party is assigned to the victim, who is examined and given first aid on the spot. Then, the victims can be placed in an emergency centre on the ground called the medical advanced post. For all victims, a sorter physician - generally a hospital physician - examines the seriousness of their injuries and classifies the victims by pathology. The evacuation by emergency health transport, if necessary, can take place after these clinical examinations and classifications. Nowadays, to evacuate the injured people, the physicians contact the emergency call centre to pass on the medical assessments of the most urgent cases. The emergency call centre then searches for available and appropriate spaces in the hospitals to care for these victims. The physicians are informed of the allocations, so they can proceed to the evacuations, choosing the emergency health transports according to the pathologies and the transport modes provided. In this context, we can observe that the evacuation is based on three important elements: the examination and classification of the victims, the search for an allocation, and the transport. In the case of the 11 March 2004 Madrid attacks, for instance, some injured people did not receive the appropriate health care because, during the search for space, the emergency call centre did not consider the transport constraints and, in particular, the traffic. Therefore, for a large-scale crisis management problem, there is a need to support the emergency call centre and the physicians in the dispatching, so as to take into account the hospitals' and the transport's constraints and availabilities.

3. PROPOSED APPROACH
To accept a proposal, an agent has to consider several issues such as, in the case of the crisis management problem, the availabilities in terms of number of beds by unit, medical and surgical staff, theatres and so on. Therefore, each agent has its own preferences, in correlation with its resource constraints and other decision criteria such as, for the case study, the level of congestion of a hospital. All the agents also make decisions by taking into account the dependencies between these decision criteria. The first hypothesis of our approach is that there are several parties involved in and impacted by the decision, and so they have to decide together according to their own constraints and decision criteria. Negotiation is the process by which a group facing a conflict communicates with one another to try and come to a mutually acceptable agreement or decision; hence, the agents have to negotiate. The conflict we have to resolve is finding an acceptable solution for all the parties by using a particular protocol. In our context, multilateral negotiation is the protocol type best suited to this kind of problem: it enables the hospitals and the physicians to negotiate together. The negotiation also deals with multiple issues. Moreover, another hypothesis is that we are in a cooperative context where all the parties have a common objective, which is to provide the best possible solution for everyone. This implies the use of a negotiation protocol encouraging the parties involved to cooperate while satisfying their preferences. Taking into account these aspects, a Multi-Agent System (MAS) seems to be a reliable method in the case of a distributed decision-making process.
Indeed, a MAS is a suitable answer when the solution has to combine, at least, distribution features and reasoning capabilities. Another motivation for using a MAS lies in the fact that MAS are well known for facilitating automated negotiation at the operative decision-making level in various applications. Therefore, our approach consists of solving a multiparty decision problem using a MAS where:
• the preferences of the agents are modelled using a multi-criteria decision aid tool, MYRIAD, also enabling us to consider multi-issue problems by evaluating proposals on several criteria;
• a cooperation-based multilateral and multi-issue negotiation protocol is used.

4. THE PREFERENCE MODEL
We consider a problem where each agent k involved in the negotiation protocol has several decision criteria, a set $N_k = \{1, \ldots, n_k\}$. These decision criteria enable the agents to evaluate the set of issues that are negotiated. The issues may or may not correspond directly to the decision criteria. For the example of the crisis management problem, the issues are the set of victims to dispatch between the hospitals. These issues are translated into decision criteria enabling the hospital to evaluate its congestion, and so into an updated number of available beds, medical teams and so on. In order to take into account the complexity that exists between the criteria/issues, we use a multi-criteria decision aiding (MCDA) tool named MYRIAD [12], developed at Thales for MCDA applications and based on a two-additive Choquet integral, which is a good compromise between versatility and the ease of understanding and modelling the interactions between decision criteria [6]. The set of attributes of $N_k$ is denoted by $X^k_1, \ldots, X^k_{n_k}$. All the attributes are made commensurate thanks to the introduction of partial utility functions $u^k_i : X^k_i \to [0,1]$. The $[0,1]$ scale depicts the satisfaction of agent k regarding the values of the attributes. An option x is identified with an element of $X^k = X^k_1 \times \cdots \times X^k_{n_k}$, with $x = (x_1, \ldots, x_{n_k})$. Then the overall assessment of x is given by

$$U_k(x) = H_k\big(u^k_1(x_1), \ldots, u^k_{n_k}(x_{n_k})\big) \qquad (1)$$

where $H_k : [0,1]^{n_k} \to [0,1]$ is the aggregation function. The overall preference relation $\succeq$ over $X^k$ is then $x \succeq y \iff U_k(x) \geq U_k(y)$. The two-additive Choquet integral is defined for $(z_1, \ldots, z_{n_k}) \in [0,1]^{n_k}$ by [7]

$$H_k(z_1, \ldots, z_{n_k}) = \sum_{i \in N_k} \Big(v^k_i - \frac{1}{2}\sum_{j \neq i} |I^k_{i,j}|\Big)\, z_i + \sum_{I^k_{i,j} > 0} I^k_{i,j}\, (z_i \wedge z_j) + \sum_{I^k_{i,j} < 0} |I^k_{i,j}|\, (z_i \vee z_j) \qquad (2)$$

where $v^k_i$ is the relative importance of criterion i for agent k, $I^k_{i,j}$ is the interaction between criteria i and j, and $\wedge$ and $\vee$ denote the min and max functions respectively. Assume that $z_i < z_j$. A positive interaction between criteria i and j depicts complementarity between these criteria (positive synergy) [7]. Hence, the lower score of z on criterion i conceals the positive effect of the better score on criterion j, to a larger extent on the overall evaluation than the impact of the relative importance of the criteria taken independently of one another. In other words, the score of z on criterion j is penalized by the lower score on criterion i. Conversely, a negative interaction between criteria i and j depicts substitutability between these criteria (negative synergy) [7]. The score of z on criterion i is then saved by a better score on criterion j.
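To make formula (2) concrete, here is a small, self-contained Python sketch of a two-additive Choquet aggregation. It is an illustrative re-implementation, not MYRIAD's actual code, and the importance weights v and interaction terms I are made-up numbers.

```python
def choquet_2additive(z, v, I):
    """Two-additive Choquet integral of partial utilities z, following
    formula (2): a linear part whose weights are corrected by half the
    absolute interactions, plus min-terms for positive interactions
    (complementarity) and max-terms for negative ones (substitutability)."""
    n = len(z)
    total = 0.0
    for i in range(n):
        penalty = 0.5 * sum(abs(I.get((min(i, j), max(i, j)), 0.0))
                            for j in range(n) if j != i)
        total += (v[i] - penalty) * z[i]
    for (i, j), inter in I.items():
        if inter > 0:        # complementary criteria: conjunction (min)
            total += inter * min(z[i], z[j])
        else:                # substitutable criteria: disjunction (max)
            total += -inter * max(z[i], z[j])
    return total

# Hypothetical 3-criteria agent: criteria 0 and 1 complement each other,
# criteria 1 and 2 are partly substitutable (made-up numbers).
v = [0.5, 0.3, 0.2]                      # importances, summing to 1
I = {(0, 1): 0.2, (1, 2): -0.1}          # pairwise interactions
print(choquet_2additive([0.9, 0.4, 0.7], v, I))   # -> 0.675
```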
In MYRIAD, we can also obtain some recommendations, corresponding to an indicator $\omega_C(H_k, x)$ measuring the worth of improving option x w.r.t. $H_k$ on some criteria $C \subseteq N_k$, as follows:

$$\omega_C(H_k, x) = \int_0^1 \frac{H_k\big((1-\tau)x_C + \tau,\; x_{N_k \setminus C}\big) - H_k(x)}{E_C(\tau, x)}\, d\tau$$

where $((1-\tau)x_C + \tau, x_{N_k \setminus C})$ is the compound act that equals $(1-\tau)x_i + \tau$ if $i \in C$ and equals $x_i$ if $i \in N_k \setminus C$. Moreover, $E_C(\tau, x)$ is the effort to go from the profile $x$ to the profile $((1-\tau)x_C + \tau, x_{N_k \setminus C})$. The function $\omega_C(H_k, x)$ depicts the average improvement of $H_k$ when the criteria of coalition C range from $x_C$ to $1_C$, divided by the average effort needed for this improvement. We generally assume that $E_C$ is of order 1, that is, $E_C(\tau, x) = \tau \sum_{i \in C} (1 - x_i)$. The expression of $\omega_C(H_k, x)$ when $H_k$ is a Choquet integral is given in [11]. The agent is then recommended to improve the coalition C for which $\omega_C(H_k, x)$ is maximum. This recommendation is very useful in a negotiation protocol, since it helps the agents to know what to do if they want an offer to be accepted, while not revealing their own preference model.

5. PROTOCOL MOTIVATIONS
For multi-issue problems, there are two approaches: a complete-package approach, where the issues are negotiated simultaneously, as opposed to the sequential approach, where the issues are negotiated one by one. When the issues are dependent, the best choice is to bargain simultaneously over all issues [5]. Thus, the complete package is the adopted approach, so that an offer bears on the overall set of injured people while taking into account the other decision criteria. We have to consider that all the parties of the negotiation process have to agree on the decision, since they are all involved in and impacted by this decision, and so a unanimous agreement is required in the protocol. In addition, no party can leave the process until an agreement is reached, i.e. a consensus is achieved. This makes sense since a proposal concerns all the parties. Moreover, we have to guarantee the availability of the resources needed by the parties, to ensure that a proposal is realistic. To this end, the information about these availabilities is used to determine admissible proposals, such that an offer cannot be made if one of the parties does not have enough resources to execute/achieve it. At the beginning of the negotiation, each party provides its maximum availabilities, thus defining the constraints that have to be satisfied by each offer submitted. The negotiation also has to converge quickly on a unanimous agreement. We decided to introduce into the negotiation protocol an incentive to cooperate taking into account the elapsed negotiation time. This incentive is defined on the basis of a time-dependent penalty: a discounting factor, as in [18], or a time-dependent threshold. This penalty has to be used in the accept/reject stage of our consensus procedure. In fact, in the case of a discounting factor, each party will accept or reject an offer by evaluating the proposal using its utility function reduced by the discounting factor. In the case of a time-dependent threshold, if the evaluation is greater than or equal to this threshold, the offer is accepted; otherwise, in the next period, the threshold is reduced. The use of a penalty alone is not enough, since it does not help in finding a solution. Some information about the assessments of the parties involved in the negotiation is needed. In particular, it would be helpful to know why an offer has been rejected and/or what can be done to make a proposal that would be accepted.
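The indicator $\omega_C$ defined in Section 4 supplies exactly this kind of information. As an illustration only (MYRIAD uses the closed-form expression from [11]), it can be approximated by numerical integration for any aggregation function H; the stand-in aggregator and the profile below are made-up.

```python
def omega(H, x, C, steps=1000):
    """Numerical approximation of the indicator omega_C(H, x): average gain
    of raising the criteria in C from x_C towards 1_C, divided by the
    first-order effort E_C(tau, x) = tau * sum_{i in C} (1 - x_i)."""
    effort_rate = sum(1.0 - x[i] for i in C)
    total, d_tau = 0.0, 1.0 / steps
    for s in range(1, steps + 1):          # start at tau > 0 (0/0 limit)
        tau = s * d_tau
        y = [(1 - tau) * xi + tau if i in C else xi
             for i, xi in enumerate(x)]
        total += (H(y) - H(x)) / (tau * effort_rate) * d_tau
    return total

# Stand-in aggregator: a simple weighted mean, NOT MYRIAD's Choquet model.
H = lambda z: 0.5 * z[0] + 0.3 * z[1] + 0.2 * z[2]
x = [0.2, 0.9, 0.5]
# Which single criterion is the most worthwhile to improve first?
best = max(([0], [1], [2]), key=lambda C: omega(H, x, C))
print("improve criterion", best[0])   # -> criterion 0 (low score, high weight)
```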
MYRIAD provides an analysis that determines the flaws of an option, here a proposal. In particular, it gives this type of information: which criteria of a proposal should be improved so as to reach the highest possible overall evaluation [11]. As we use this tool to model the parties involved in the negotiation, the information about the criteria to improve can be used by the mediator to elaborate the proposals. We also consider that the dual function can be used to take into account another type of information: on which criteria of a proposal no improvement is necessary, so that the overall evaluation of the proposal remains acceptable and does not decrease. Thus, all this information forms constraints to be satisfied as much as possible by the parties to make a new proposal. We are in a cooperative context, and revealing one's opinion on what can be improved is not prohibited; on the contrary, it is useful and recommended here, seeing that it helps in converging on an agreement. Therefore, when one of the parties refuses an offer, some information will be communicated. In order to facilitate and speed up the negotiation, we introduce a mediator. This specific entity is in charge of making the proposals to the other parties in the system, by taking into account their public constraints (e.g. their availabilities) and the recommendations they make. This mediator can also be considered as the representative of the general interest we can have in some applications; in the crisis management problem, for instance, the physician will be the mediator and will also have some more information to consider when making an offer (e.g. traffic state, transport mode and time). Each party in a negotiation N, a negotiator, can also be the mediator of another negotiation N', this party becoming the representative of N' in the negotiation N, as illustrated by fig. 1, which can also help in reducing the communication time.

Figure 1: An illustration of some system.

6. AGENTIFICATION
How the problem is transposed into a MAS problem is a very important aspect when designing such a system. The agentification has an influence upon the system's efficiency in solving the problem. Therefore, in this section, we describe the elements and constraints taken into account during the modelling phase and for the model itself. However, for this negotiation application, the modelling is quite natural when one observes the negotiation protocol's motivations and main properties. First of all, it seems obvious that there should be one agent for each player of our multilateral multi-issue negotiation protocol. The agents have the involved parties' information and preferences. These agents are:
• Autonomous: they decide for themselves what, when and under what conditions actions should be performed;
• Rational: they have a means-ends competence to fit their decisions to their knowledge, preferences and goals;
• Self-interested: they have their own interests, which may conflict with the interests of other agents.
Moreover, their preferences are modelled, and a proposal evaluated and analysed, using MYRIAD. Each agent has private information and can access public information as knowledge. In fact, there are two types of agents: the mediator type for the agent corresponding to the mediator of our negotiation protocol, the delegated physician in our application, and the negotiator type for the agents corresponding to the other parties, the hospitals.
The main behaviours that an agent of type mediator needs in order to negotiate in our protocol are the following:
• convert_improvements: converts the information given by the other agents involved in the negotiation, about the improvements to be made, into constraints on the next proposal to be made;
• convert_no_decrease: converts the information given by the other agents involved in the negotiation, about the points that should not be changed, into constraints on the next proposal to be made;
• construct_proposal: constructs a new proposal according to the constraints obtained with convert_improvements and convert_no_decrease and the agent's preferences.
The main behaviours that an agent of type negotiator needs in order to negotiate in our protocol are the following:
• convert_proposal: converts a proposal into a MYRIAD option of the agent, according to its preference model and its private data;
• convert_improvements_wc: converts the agent's recommendations for the improvement of a MYRIAD option into general information on the proposal;
• convert_no_decrease_wc: converts the agent's recommendations about the criteria that should not be changed in the MYRIAD option into general information on the proposal.
In addition to these behaviours, there are, for the two types of agents, access behaviours to MYRIAD functionalities, such as the evaluation and improvement functions:
• evaluate_option: evaluates the MYRIAD option obtained using the agent behaviour convert_proposal;
• improvements: gets the agent's recommendations to improve a proposal from the MYRIAD option;
• no_decrease: gets the agent's recommendations not to change some criteria from the MYRIAD option.
Of course, before running the system with such agents, we must have defined each party's preference model in MYRIAD. This model has to be part of the agent so that it can be used to make the assessments and to retrieve the improvements.
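The following Python sketch shows how these behaviours can chain together in a negotiator agent's reaction to a proposal. It is hypothetical (the actual prototype, described in Section 8, extends JADE's Java classes), and DummyModel merely stands in for a MYRIAD preference model.

```python
from dataclasses import dataclass

class DummyModel:
    """Stand-in for a MYRIAD preference model (illustrative only)."""
    def to_option(self, proposal): return proposal        # convert_proposal
    def evaluate(self, option): return sum(option) / len(option)
    def improvements(self, option):   # worst criterion first
        return [min(range(len(option)), key=lambda i: option[i])]
    def no_decrease(self, option):    # best criterion should not drop
        return [max(range(len(option)), key=lambda i: option[i])]

@dataclass
class Negotiator:
    """Evaluate a proposal with a private preference model and answer
    with recommendations, without revealing the model itself."""
    threshold: float
    model: DummyModel

    def react(self, proposal):
        option = self.model.to_option(proposal)
        score = self.model.evaluate(option)               # evaluate_option
        if score >= self.threshold:
            # accept, indicating criteria that must not decrease
            return ("accept-proposal", self.model.no_decrease(option))
        # reject, indicating criteria whose improvement is most worthwhile
        return ("reject-proposal", self.model.improvements(option))

agent = Negotiator(threshold=0.6, model=DummyModel())
print(agent.react([0.9, 0.3, 0.5]))   # -> ('reject-proposal', [1])
```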
It is the same for the evaluation of a proposal: each agent has to convert the information about the issues to update its private information and to obtain the values of each attribute of the decision criteria. 7. OUR PROTOCOL Formally, we consider negotiations where a set of players A = {1, 2, ... , m} and a player a are negotiating over a set Q of size q. The player a is the protocol mediator, the mediator agent of the agentification. The utility/preference function of a player k ∈ A ∪ {a} is Uk, defined using MYRIAD, as presented in Section 4, with a set Nk of criteria, Xk an option, and so on. An offer is a vector P = (P1, P2, · · · , Pm), a partition of Q, in which Pk is player k``s share of Q. We have P ∈ P where P is the set of admissible proposals, a finite set. Note that P is determined using all players general constraints on the proposals and Q. Moreover, let ˜P denote a particular proposal defined as a``s preferred proposal. We also have the following notation: δk is the threshold decrease factor of player k, Φk : Pk → Xk is player k``s function to convert a proposal to an option and Ψk is the function indicating which points P has to be improved, with Ψk its dual function - on which points no improvement is necessary. Ψk is obtained using the dual function of ωC (Hk, x): eωC (Hk, x)= Z 1 0 Hk(x) − Hk ` τ xC , xNk\C ´ eEC (τ, x) dτ Where eEC (τ, x) is the cost/effort to go from (τxC , xNk\C ) to x. In period t of our consensus procedure, player a proposes an agreement P. All players k ∈ A respond to a by accepting or rejecting P. The responses are made simultaneously. If all players k ∈ A accept the offer, the game ends. If any player k rejects P, then the next period t+1 begins: player a makes another proposal P by taking into account information provided by the players and the ones that have rejected P apply a penalty. Therefore, our negotiation protocol can be as follows: Protocol P1. • At the beginning, we set period t = 0 • a makes a proposal P ∈ P that has not been proposed before. • Wait that all players of A give their opinion Yes or No to the player a. If all players agree on P, this later is chosen. Otherwise t is incremented and we go back to previous point. • If there is no more offer left from P, the default offer ˜P will be chosen. • The utility of players regarding a given offer decreases over time. More precisely, the utility of player k ∈ A at period t regarding offer P is Uk(Φk(Pk), t) = ft(Uk(Φk(Pk))), where one can take for instance ft(x) = x.(δk)t or ft(x) = x − δk.t, as penalty function. Lemma 1. Protocol P1 has at least one subgame perfect equilibrium 1 . Proof : Protocol P1 is first transformed in a game in extensive form. To this end, one shall specify the order in which the responders A react to the offer P of a. However the order in which the players answer has no influence on the course of the game and in particular on their personal utility. Hence protocol P1 is strictly equivalent to a game in 1 A subgame perfect equilibrium is an equilibrium such that players'' strategies constitute a Nash equilibrium in every subgame of the original game [18, 16]. A Nash equilibrium is a set of strategies, one for each player, such that no player has incentive to unilaterally change his/her action [15]. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 947 extensive form, considering any order of the players A. This game is clearly finite since P is finite and each offer can only be proposed once. 
Finally, P1 corresponds to a game with perfect information. We end the proof by using a classical result stating that any finite game in extensive form with perfect information has at least one subgame perfect equilibrium (see e.g. [16]). Rational players (in the sense of von Neumann and Morgenstern) involved in protocol P1 will necessarily come up with a subgame perfect equilibrium.

Example 1. Consider an example with A = {1, 2} and $\mathcal{P}$ = {P1, P2, P3}, where the default offer is P1. Assume that $f_t(x) = x - 0.1\,t$. Consider the following table, giving the utilities at t = 0.

       P1     P2     P3
  a    1      0.8    0.7
  1    0.1    0.7    0.5
  2    0.1    0.3    0.8

It is easy to see that there is one single subgame perfect equilibrium of protocol P1 corresponding to these values. This equilibrium consists of the following choices: first, a proposes P3; player 1 rejects this offer; a then proposes P2, and both players 1 and 2 accept, since otherwise they are threatened with receiving the offer P1, which is worse for them. Finally, offer P2 is chosen. Option P1 is the best one for a, but the two other players vetoed it. It is interesting to point out that, even though a prefers P2 to P3, offer P3 is proposed first, and this is what makes P2 accepted. If a proposed P2 first, then the subgame perfect equilibrium in this situation would be P3. To sum up, the less preferred options have to be proposed first in order to finally obtain the best one. But this entails a waste of time. Analysing the previous example, one sees that the game outcome at the equilibrium is P2, which is not very attractive for player 2. Option P3 seems more balanced, since no player judges it badly. It could be seen as a better solution, as a consensus among the agents. In order to introduce this notion of balancedness into the protocol, we introduce a condition under which a player is obliged to accept the proposal, reducing the autonomy of the agents but increasing rationality and cooperation. More precisely, if the utility of a player is larger than a given threshold, then acceptance is required. The threshold decreases over time, so that players have to make more and more concessions. Therefore, the protocol becomes as follows.

Protocol P2.
• At the beginning, we set period t = 0.
• a makes a proposal $P \in \mathcal{P}$ that has not been proposed before.
• Wait until all players of A give their opinion, Yes or No, to the player a. A player k must accept the offer if $U_k(\Phi_k(P_k)) \geq \rho_k(t)$, where $\rho_k(t)$ tends to zero as t grows. Moreover, there exists T such that for all $t \geq T$, $\rho_k(t) = 0$. If all players agree on P, the latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left in $\mathcal{P}$, the default offer $\tilde{P}$ is chosen.

One can show, exactly as in Lemma 1, that protocol P2 has at least one subgame perfect equilibrium. We expect that protocol P2 provides a solution not too far from P*, so it favours fairness among the players.
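As a quick illustration of how the decreasing acceptance thresholds drive protocol P2 towards a balanced outcome, here is a hypothetical Python sketch run on the utilities of Example 1. The threshold schedule $\rho_k(t) = \max(0, 0.8 - 0.2t)$ and the mediator's proposal order are made-up choices, and strategic behaviour is simplified to pure threshold acceptance.

```python
# Utilities of Example 1 at t = 0 (rows: mediator a, players 1 and 2).
U = {"a": {"P1": 1.0, "P2": 0.8, "P3": 0.7},
     1:   {"P1": 0.1, "P2": 0.7, "P3": 0.5},
     2:   {"P1": 0.1, "P2": 0.3, "P3": 0.8}}

def rho(t):
    """Hypothetical acceptance threshold: decreases to 0 (here by t = 4)."""
    return max(0.0, 0.8 - 0.2 * t)

offers = sorted(U["a"], key=U["a"].get, reverse=True)  # a's preferred order
t = 0
for P in offers:
    # a player must accept as soon as its utility reaches the threshold
    if all(U[k][P] >= rho(t) for k in (1, 2)):
        print(f"agreement on {P} at period {t}")
        break
    t += 1
else:
    print("default offer chosen")
```

With this schedule the run ends on the balanced option P3 at period 2, which matches the fairness motivation discussed above.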
Therefore, our cooperation-based multilateral multi-issue protocol is the following.

Protocol P.
• At the beginning, we set period t = 0.
• a makes a proposal $P \in \mathcal{P}$ that has not been proposed before, considering $\Psi_k(P^t)$ and $\tilde{\Psi}_k(P^t)$ for all players $k \in A$.
• Wait until all players of A give their opinion, (Yes, $\tilde{\Psi}_k(P^t)$) or (No, $\Psi_k(P^t)$), to the player a. A player k must accept the offer if $U_k(\Phi_k(P_k)) \geq \rho_k(t)$, where $\rho_k(t)$ tends to zero as t grows. Moreover, there exists T such that for all $t \geq T$, $\rho_k(t) = 0$. If all players agree on P, the latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left in $\mathcal{P}$, the default offer $\tilde{P}$ is chosen.

8. EXPERIMENTS
We developed a MAS using the widely used JADE agent platform [1]. This MAS is designed to be as general as possible (e.g. a general framework to specialise according to the application) and enables us to make some preliminary experiments. The experiments aim at verifying that our approach gives solutions as close as possible to the Maximin solution, in a small number of rounds and, hopefully, in a short time, since our context is highly cooperative. We defined the two types of agents and their behaviours as introduced in Section 6. The agents and their behaviours correspond to the main classes of our prototype: NegotiatorAgent and NegotiatorBehaviour for the negotiator agents, and MediatorAgent and MediatorBehaviour for the mediator agent. These classes extend JADE classes and integrate MYRIAD into the agents, reducing the amount of communication in the system. Some application-dependent functionalities have to be implemented by extending these classes. In particular, all conversion parts of the agents have to be specified according to the application, since to convert a proposal into decision criteria, we need to know this model and the correlations between the proposals and this model.

First, to illustrate our protocol, we present a simple example of our dispatch problem. In this example, we have three hospitals, H1, H2 and H3. Each hospital can receive victims having a particular pathology, in such a way that H1 can receive patients with the pathology burn, surgery or orthopedic, H2 can receive patients with the pathology surgery, orthopedic or cardiology, and H3 can receive patients with the pathology burn or cardiology. All the hospitals have similar decision criteria, reflecting their preferences on the level of congestion they can face for the overall hospital and the different services available, as briefly explained for hospital H1 hereafter.

Figure 3: The H1 preference model in MYRIAD.

For hospital H1, the preference model (fig. 3) is composed of five criteria. These criteria correspond to the preferences on the pathologies the hospital can treat. In the case of the pathology burn, the corresponding criterion, also named burn as shown in fig. 3, represents the preferences of H1 according to the value of Cburn, which is the current capacity for burn. Therefore, the utility function of this criterion represents a preference such that the more patients of this pathology there are in the hospital, the less the hospital can satisfy them, starting from an initial capacity. In addition to reflecting this kind of viewpoint, the aggregation function as defined in MYRIAD introduces a veto on the criteria burn, surgery, orthopedic and EReceipt, where EReceipt is the criterion for the preferences about the capacity to receive a number of patients at the same time. In this simplified example, the physician has no particular preferences on the dispatch, and the mediator agent chooses a proposal randomly in a subset of the set of admissibility. This subset has to satisfy as much as possible the recommendations made by the hospitals. To solve this problem, for this example, we decided to solve a linear problem with the availability constraints and the recommendations as linear constraints on the dispatch values.
The set of admissible proposals is then obtained by solving this linear problem with Prolog. Moreover, only the recommendations on how to improve a proposal are taken into account. The problem to solve is then to dispatch to hospitals H1, H2 and H3 a set of victims composed of 5 victims with the pathology burn, 10 with surgery, 3 with orthopedic and 7 with cardiology. The availabilities of the hospitals are as presented in the following table.

        Overall   burn   surg.   orthop.   cardio.
  H1    11        4      8       10        -
  H2    25        -      3       4         10
  H3    7         10     -       -         3

We obtain a multiagent system with the mediator agent and three agents of type negotiator for the three hospitals of the problem. The hospitals' thresholds are fixed approximately at the level where an evaluation is considered as good. To start, the negotiator agents send their availabilities. The mediator agent makes a proposal chosen randomly in the admissible set obtained with these availabilities as linear constraints. This proposal is the vector P0 = [[H1, burn, 3], [H1, surgery, 8], [H1, orthopedic, 0], [H2, surgery, 2], [H2, orthopedic, 3], [H2, cardiology, 6], [H3, burn, 2], [H3, cardiology, 1]], and the mediator sends propose(P0) to H1, H2 and H3 for approval. Each negotiator agent evaluates this proposal and answers back by accepting or rejecting P0:
• Agent H1 rejects this offer, since its evaluation is very far from the threshold (0.29, a bad score), and gives a recommendation to improve burn and surgery by sending the message reject_proposal([burn, surgery]);
• Agent H2 accepts this offer by sending the message accept_proposal(), the proposal evaluation being good;
• Agent H3 accepts P0 by sending the message accept_proposal(), the proposal evaluation being good.
Just with the recommendations provided by agent H1, the mediator is able to make a new proposal by restricting the values of burn and surgery. The new proposal obtained is then P1 = [[H1, burn, 0], [H1, surgery, 8], [H1, orthopedic, 1], [H2, surgery, 2], [H2, orthopedic, 2], [H2, cardiology, 6], [H3, burn, 5], [H3, cardiology, 1]]. The mediator sends propose(P1) to the negotiator agents. H1, H2 and H3 answer back by sending the message accept_proposal(), P1 being evaluated with a high enough score to be acceptable, and also considered as a good proposal when using the explanation function of MYRIAD. An agreement is reached with P1. Note that the evaluation of P1 by H3 has decreased in comparison with P0, but not enough for it to be rejected, and that this solution is the Pareto one, P*. Other examples have been tested with the same settings: issues in ℕ, three negotiator agents and the same mediator agent, with no preference model but selecting the proposal randomly. We obtained solutions either equal or close to the Maximin solution, the distance in terms of standard deviation being less than 0.0829, with evaluations not far from the ones obtained with P*, and with less than seven proposals made. This shows that we are able to solve this multi-issue multilateral negotiation problem in a simple and efficient way, with solutions close to the Pareto solution.
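The admissible set used by the mediator can be sketched as a simple feasibility check over the availability constraints. The paper solves it with Prolog; this Python version, using the capacity table above, is only illustrative.

```python
victims = {"burn": 5, "surgery": 10, "orthopedic": 3, "cardiology": 7}
# hospital -> (overall capacity, per-pathology capacity); a missing
# pathology means the hospital cannot treat it
capacity = {"H1": (11, {"burn": 4, "surgery": 8, "orthopedic": 10}),
            "H2": (25, {"surgery": 3, "orthopedic": 4, "cardiology": 10}),
            "H3": (7,  {"burn": 10, "cardiology": 3})}

def admissible(dispatch):
    """dispatch[(hospital, pathology)] = number of victims sent there."""
    for p, total in victims.items():          # every victim is dispatched
        if sum(n for (h, q), n in dispatch.items() if q == p) != total:
            return False
    for h, (overall, per) in capacity.items():  # availability constraints
        sent = {q: n for (g, q), n in dispatch.items() if g == h}
        if sum(sent.values()) > overall:
            return False
        if any(q not in per or n > per[q] for q, n in sent.items()):
            return False
    return True

# Proposal P1 from the example run above is admissible, as expected.
P1 = {("H1", "burn"): 0, ("H1", "surgery"): 8, ("H1", "orthopedic"): 1,
      ("H2", "surgery"): 2, ("H2", "orthopedic"): 2, ("H2", "cardiology"): 6,
      ("H3", "burn"): 5, ("H3", "cardiology"): 1}
print(admissible(P1))  # True
```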
Our second contribution is the use of sharp recommendations in the protocol to help in accelerating the search of a consensus between the cooperative agents and in finding an optimal solution. We have also shown that the protocol has subgame perfect equilibria and these equilibria converge to the usual maximum solution. Moreover, we tested this protocol in a crisis management context where the negotiation aim is where to evacuate a whole set of injured people to predefined hospitals. We have already developed a first MAS, in particular integrating MYRIAD, to test this protocol in order to know more about its efficiency in terms of solution quality and quickness in finding a consensus. This prototype enabled us to solve some examples with our approach and the results we obtained are encouraging since we obtained quickly good agreements, close to the Pareto solution, in the light of the initial constraints of the problem: the availabilities. We still have to improve our MAS by taking into account The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 949 the two types of recommendations and by adding a preference model to the mediator of our system. Moreover, a comparative study has to be done in order to evaluate the performance of our framework against the existing ones and against some variations on the protocol. 10. ACKNOWLEDGEMENT This work is partly funded by the ICIS research project under the Dutch BSIK Program (BSIK 03024). 11. REFERENCES [1] JADE. http://jade.tilab.com/. [2] P. Faratin, C. Sierra, and N. R. Jennings. Using similarity criteria to make issue trade-offs in automated negotiations. Artificial Intelligence, 142(2):205-237, 2003. [3] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Optimal negotiation of multiple issues in incomplete information settings. In 3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS``04), pages 1080-1087, New York, USA, 2004. [4] S. S. Fatima, M. Wooldridge, and N. R. Jennings. A comparative study of game theoretic and evolutionary models of bargaining for software agents. Artificial Intelligence Review, 23:185-203, 2005. [5] S. S. Fatima, M. Wooldridge, and N. R. Jennings. On efficient procedures for multi-issue negotiation. In 8th International Workshop on Agent-Mediated Electronic Commerce(AMEC``06), pages 71-84, Hakodate, Japan, 2006. [6] M. Grabisch. The application of fuzzy integrals in multicriteria decision making. European J. of Operational Research, 89:445-456, 1996. [7] M. Grabisch, T. Murofushi, and M. Sugeno. Fuzzy Measures and Integrals. Theory and Applications (edited volume). Studies in Fuzziness. Physica Verlag, 2000. [8] M. Hemaissia, A. El Fallah-Seghrouchni, C. Labreuche, and J. Mattioli. Cooperation-based multilateral multi-issue negotiation for crisis management. In 2th International Workshop on Rational, Robust and Secure Negotiation (RRS``06), pages 77-95, Hakodate, Japan, May 2006. [9] T. Ito, M. Klein, and H. Hattori. A negotiation protocol for agents with nonlinear utility functions. In AAAI, 2006. [10] M. Klein, P. Faratin, H. Sayama, and Y. Bar-Yam. Negotiating complex contracts. Group Decision and Negotiation, 12:111-125, March 2003. [11] C. Labreuche. Determination of the criteria to be improved first in order to improve as much as possible the overall evaluation. In IPMU 2004, pages 609-616, Perugia, Italy, 2004. [12] C. Labreuche and F. Le Hu´ed´e. MYRIAD: a tool suite for MCDA. In EUSFLAT``05, pages 204-209, Barcelona, Spain, 2005. [13] R. Y. K. 
Lau. Towards genetically optimised multi-agent multi-issue negotiations. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS``05), Big Island, Hawaii, 2005. [14] R. J. Lin. Bilateral multi-issue contract negotiation for task redistribution using a mediation service. In Agent Mediated Electronic Commerce VI (AMEC``04), New York, USA, 2004. [15] J. F. Nash. Non cooperative games. Annals of Mathematics, 54:286-295, 1951. [16] G. Owen. Game Theory. Academic Press, New York, 1995. [17] V. Robu, D. J. A. Somefun, and J. A. L. Poutr´e. Modeling complex multi-issue negotiations using utility graphs. In 4th International Joint Conference on Autonomous agents and multiagent systems (AAMAS``05), pages 280-287, 2005. [18] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50:97-109, jan 1982. [19] L.-K. Soh and X. Li. Adaptive, confidence-based multiagent negotiation strategy. In 3rd International Joint Conference on Autonomous agents and multiagent systems (AAMAS``04), pages 1048-1055, Los Alamitos, CA, USA, 2004. [20] H.-W. Tung and R. J. Lin. Automated contract negotiation using a mediation service. In 7th IEEE International Conference on E-Commerce Technology (CEC``05), pages 374-377, Munich, Germany, 2005. 950 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
A Multilateral Multi-issue Negotiation Protocol ABSTRACT In this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context. We consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment. This information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents. In addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent. 1. INTRODUCTION Multi-issue negotiation protocols represent an important field of study since negotiation problems in the real world are often complex ones involving multiple issues. To date, most of previous work in this area ([2, 3, 19, 13]) dealt almost exclusively with simple negotiations involving independent issues. However, real-world negotiation problems involve complex dependencies between multiple issues. When one wants to buy a car, for example, the value of a given car is highly dependent on its price, consumption, comfort and so on. The addition of such interdependencies greatly complicates the agents utility functions and classical utility functions, such as the weighted sum, are not sufficient to model this kind of preferences. In [10, 9, 17, 14, 20], the authors consider inter-dependencies between issues, most often defined with boolean values, except for [9], while we can deal with continuous and discrete dependent issues thanks to the modelling power of the Choquet integral. In [17], the authors deal with bilateral negotiation while we are interested in a multilateral negotiation setting. Klein et al. [10] present an approach similar to ours, using a mediator too and information about the strength of the approval or rejection that an agent makes during the negotiation. In our protocol, we use more precise information to improve the proposals thanks to the multi-criteria methodology and tools used to model the preferences of our agents. Lin, in [14, 20], also presents a mediation service but using an evolutionary algorithm to reach optimal solutions and as explained in [4], players in the evolutionary models need to repeatedly interact with each other until the stable state is reached. As the population size increases, the time it takes for the population to stabilize also increases, resulting in excessive computation, communication, and time overheads that can become prohibitive, and for one-to-many and many-to-many negotiations, the overheads become higher as the number of players increases. In [9], the authors consider a non-linear utility function by using constraints on the domain of the issues and a mediation service to find a combination of bids maximizing the social welfare. Our preference model, a nonlinear utility function too, is more complex than [9] one since the Choquet integral takes into account the interactions and the importance of each decision criteria/issue, not only the dependencies between the values of the issues, to determine the utility. We also use an iterative protocol enabling us to find a solution even when no bid combination is possible. 
In this paper, we propose a negotiation protocol suited for multiple agents with complex preferences and taking into account, at the same time, multiple interdependent issues and recommendations made by the agents to improve a proposal. Moreover, the preferences of our agents are modelled using a multi-criteria methodology and tools enabling us to take into account information about the improvements that can be made to a proposal, in order to help in accelerating the search for a consensus between the agents. Therefore, we propose a negotiation protocol consisting of solving our decision problem using a MAS with a multi-criteria decision aiding modelling at the agent level and a cooperation-based multilateral multi-issue negotiation protocol. This protocol is studied under a non-cooperative approach and it is shown that it has subgame perfect equilibria, provided that agents behave rationally in the sense of von Neumann and Morgenstern. The approach proposed in this paper has been first introduced and presented in [8]. In this paper, we present our first experiments, with some noteworthy results, and a more complex multi-agent system with representatives to enable us to have a more robust system. In Section 2, we present our application, a crisis management problem. Section 3 deals with the general aspect of the proposed approach. The preference modelling is described in sect. 4, whereas the motivations of our protocol are considered in sect. 5 and the agent/multiagent modelling in sect. 6. Section 7 presents the formal modelling and properties of our protocol before presenting our first experiments in sect. 8. Finally, in Section 9, we conclude and present the future work. 2. CASE STUDY This protocol is applied to a crisis management problem. Crisis management is a relatively new field of management and is composed of three types of activities: crisis prevention, operational preparedness and management of declared crisis. The crisis prevention aims to bring the risk of crisis to an acceptable level and, when possible, avoid that the crisis actually happens. The operational preparedness includes strategic advanced planning, training and simulation to ensure availability, rapid mobilisation and deployment of resources to deal with possible emergencies. The management of declared crisis is the response to--including the evacuation, search and rescue--and the recovery from the crisis by minimising the effects of the crises, limiting the impact on the community and environment and, on a longer term, by bringing the community's systems back to normal. In this paper, we focus on the response part of the management of declared crisis activity, and particularly on the evacuation of the injured people in disaster situations. When a crisis is declared, the plans defined during the operational preparedness activity are executed. For disasters, master plans are executed. These plans are elaborated by the authorities with the collaboration of civil protection agencies, police, health services, non-governmental organizations, etc. . When a victim is found, several actions follow. First, a rescue party is assigned to the victim who is examined and is given first aid on the spot. Then, the victims can be placed in an emergency centre on the ground called the medical advanced post. For all victims, a sorter physician--generally a hospital physician--examines the seriousness of their injuries and classifies the victims by pathology. 
The evacuation by emergency health transport if necessary can take place after these clinical examinations and classifications. Nowadays, to evacuate the injured people, the physicians contact the emergency call centre to pass on the medical assessments of the most urgent cases. The emergency call centre then searches for available and appropriate spaces in the hospitals to care for these victims. The physicians are informed of the allocations, so they can proceed to the evacuations choosing the emergency health transports according to the pathologies and the transport modes provided. In this context, we can observe that the evacuation is based on three important elements: the examination and classification of the victims, the search for an allocation and the transport. In the case of the 11 March 2004 Madrid attacks, for instance, some injured people did not receive the appropriate health care because, during the search for space, the emergency call centre did not consider the transport constraints and, in particular, the traffic. Therefore, for a large scale crisis management problem, there is a need to support the emergency call centre and the physicians in the dispatching to take into account the hospitals and the transport constraints and availabilities. 3. PROPOSED APPROACH To accept a proposal, an agent has to consider several issues such as, in the case of the crisis management problem, the availabilities in terms of number of beds by unit, medical and surgical staffs, theatres and so on. Therefore, each agent has its own preferences in correlation with its resource constraints and other decision criteria such as, for the case study, the level of congestion of a hospital. All the agents also make decisions by taking into account the dependencies between these decision criteria. The first hypothesis of our approach is that there are several parties involved in and impacted by the decision, and so they have to decide together according to their own constraints and decision criteria. Negotiation is the process by which a group facing a conflict communicates with one another to try and come to a mutually acceptable agreement or decision and so, the agents have to negotiate. The conflict we have to resolve is finding an acceptable solution for all the parties by using a particular protocol. In our context, multilateral negotiation is a negotiation protocol type that is the best suited for this type of problem: this type of protocol enables the hospitals and the physicians to negotiate together. The negotiation also deals with multiple issues. Moreover, an other hypothesis is that we are in a cooperative context where all the parties have a common objective which is to provide the best possible solution for everyone. This implies the use of a negotiation protocol encouraging the parties involved to cooperate as satisfying its preferences. Taking into account these aspects, a Multi-Agent System (MAS) seems to be a reliable method in the case of a distributed decision making process. Indeed, a MAS is a suitable answer when the solution has to combine, at least, distribution features and reasoning capabilities. Another motivation for using MAS lies in the fact that MAS is well known for facilitating automated negotiation at the operative decision making level in various applications. 
Therefore, our approach consists of solving a multiparty decision problem using a MAS with: • The preferences of the agents modelled using a multi-criteria decision aid tool, MYRIAD, also enabling us to consider multi-issue problems by evaluating proposals on several criteria. • A cooperation-based multilateral and multi-issue negotiation protocol. 4. THE PREFERENCE MODEL We consider a problem where an agent has several decision criteria, a set Nk = {1, ..., nk} of criteria for each agent k involved in the negotiation protocol. These decision criteria enable the agents to evaluate the set of issues that are negotiated. The issues correspond, directly or not, to the decision criteria. However, for the example of the crisis management problem, the issues are the set of victims to dispatch between the hospitals. These issues are translated into decision criteria enabling the hospital to evaluate its congestion, and so into an updated number of available beds, medical teams and so on. In order to take into account the complexity that exists between the criteria/issues, we use a multi-criteria decision aiding (MCDA) tool named MYRIAD [12], developed at Thales for MCDA applications and based on a two-additive Choquet integral, which is a good compromise between versatility and ease to understand and model the interactions between decision criteria [6]. The set of the attributes of Nk is denoted by Xk1, ..., Xknk. All the attributes are made commensurate thanks to the introduction of partial utility functions uki : Xki → [0, 1]. The [0, 1] scale depicts the satisfaction of the agent k regarding the values of the attributes. An option x is identified with an element of Xk = Xk1 × · · · × Xknk, with x = (x1, ..., xnk). Then the overall assessment of x is given by uk(x) = Hk(uk1(x1), ..., uknk(xnk)), where Hk : [0, 1]^nk → [0, 1] is the aggregation function. The overall preference relation ≿ over Xk is then given by x ≿ y iff uk(x) ≥ uk(y). With a two-additive Choquet integral, Hk takes the form Hk(z) = Σ_{i,j : Ik(i,j) > 0} Ik(i,j) (zi ∧ zj) + Σ_{i,j : Ik(i,j) < 0} |Ik(i,j)| (zi ∨ zj) + Σ_{i ∈ Nk} zi (vki − (1/2) Σ_{j ≠ i} |Ik(i,j)|), where vki is the relative importance of criterion i for agent k and Ik(i,j) is the interaction between criteria i and j; ∧ and ∨ denote the min and max functions respectively. Assume that zi < zj. A positive interaction between criteria i and j depicts complementarity between these criteria (positive synergy) [7]. Hence, the lower score of z on criterion i conceals the positive effect of the better score on criterion j to a larger extent on the overall evaluation than the impact of the relative importance of the criteria taken independently of the other ones. In other words, the score of z on criterion j is penalized by the lower score on criterion i. Conversely, a negative interaction between criteria i and j depicts substitutability between these criteria (negative synergy) [7]. The score of z on criterion i is then saved by a better score on criterion j. In MYRIAD, we can also obtain recommendations corresponding to an indicator ωC(Hk, x) measuring the worth of improving option x w.r.t. Hk on some criteria C ⊆ Nk: it is the average improvement of Hk when the criteria of coalition C range from xC to 1C, divided by the average effort EC needed for this improvement. We generally assume that EC is of order 1, that is, EC(τ, x) = τ Σ_{i ∈ C} (1 − xi). The expression of ωC(Hk, x) when Hk is a Choquet integral is given in [11]. The agent is then recommended to improve the criteria of the coalition C for which ωC(Hk, x) is maximum.
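To make the aggregation and the recommendation indicator concrete, here is a minimal Python sketch; the function names are our own, it assumes the standard two-additive Choquet form above, and it takes τ = 1 in the order-1 effort, so it illustrates the model rather than MYRIAD's actual implementation:

from itertools import combinations

def choquet_2additive(z, v, inter):
    """Two-additive Choquet integral of partial utilities z (values in [0, 1]).
    v[i] is the importance of criterion i; inter[(i, j)], with i < j, is the
    interaction I(i, j) between criteria i and j."""
    n = len(z)
    total = sum(z[i] * (v[i] - 0.5 * sum(abs(inter.get((min(i, j), max(i, j)), 0.0))
                                         for j in range(n) if j != i))
                for i in range(n))
    for (i, j), I in inter.items():
        if I > 0:      # complementarity: the lower score dominates
            total += I * min(z[i], z[j])
        elif I < 0:    # substitutability: the better score saves the option
            total += -I * max(z[i], z[j])
    return total

def worth_of_improvement(H, z, C):
    """omega_C(H, z): improvement of H when the criteria of C are raised to 1,
    divided by the order-1 effort sum_{i in C} (1 - z_i)."""
    z_up = [1.0 if i in C else zi for i, zi in enumerate(z)]
    effort = sum(1.0 - z[i] for i in C)
    return (H(z_up) - H(z)) / effort if effort > 0 else 0.0

def recommended_coalition(H, z, max_size=2):
    """The coalition of criteria whose improvement is worth the most."""
    coalitions = [set(c) for k in range(1, max_size + 1)
                  for c in combinations(range(len(z)), k)]
    return max(coalitions, key=lambda C: worth_of_improvement(H, z, C))

For instance, with v = [0.5, 0.5], inter = {(0, 1): 0.3} and z = (0.2, 0.9), the positive interaction pulls the overall score to 0.445, below the weighted mean 0.55, as the lower partial utility penalizes the higher one.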
This recommendation is very useful in a negotiation protocol since it helps the agents to know what to do if they want an offer to be accepted, while not revealing their own preference model. 5. PROTOCOL MOTIVATIONS For multi-issue problems, there are two approaches: the complete package approach, where the issues are negotiated simultaneously, as opposed to the sequential approach, where the issues are negotiated one by one. When the issues are dependent, it is best to bargain simultaneously over all issues [5]. Thus, the complete package is the adopted approach, so that an offer bears on the overall set of injured people while taking into account the other decision criteria. We have to consider that all the parties of the negotiation process have to agree on the decision since they are all involved in and impacted by this decision, so a unanimous agreement is required in the protocol. In addition, no party can leave the process until an agreement is reached, i.e. a consensus achieved. This makes sense since a proposal concerns all the parties. Moreover, we have to guarantee the availability of the resources needed by the parties to ensure that a proposal is realistic. To this end, the information about these availabilities is used to determine admissible proposals, such that an offer cannot be made if one of the parties has not enough resources to execute/achieve it. At the beginning of the negotiation, each party provides its maximum availabilities, this defining the constraints that have to be satisfied by each offer submitted. The negotiation also has to converge quickly on a unanimous agreement. We decided to introduce in the negotiation protocol an incentive to cooperate taking into account the elapsed negotiation time. This incentive is defined on the basis of a time-dependent penalty: a discounting factor as in [18] or a time-dependent threshold. This penalty is used in the accept/reject stage of our consensus procedure. In the case of a discounting factor, each party will accept or reject an offer by evaluating the proposal using its utility function discounted by this factor. In the case of a time-dependent threshold, if the evaluation is greater than or equal to this threshold, the offer is accepted; otherwise, in the next period, the threshold is reduced. The use of a penalty alone is not enough, since it does not help in finding a solution. Some information about the assessments of the parties involved in the negotiation is needed. In particular, it would be helpful to know why an offer has been rejected and/or what can be done to make a proposal that would be accepted. MYRIAD provides an analysis that determines the flaws of an option, here a proposal. In particular, it gives this type of information: which criteria of a proposal should be improved so as to reach the highest possible overall evaluation [11]. As we use this tool to model the parties involved in the negotiation, the information about the criteria to improve can be used by the mediator to elaborate the proposals. We also consider that the dual function can be used to take into account another type of information: on which criteria of a proposal no improvement is necessary, so that the overall evaluation of a proposal remains acceptable and does not decrease. Thus, all this information forms constraints to be satisfied as much as possible by the parties when making a new proposal. Figure 1: An illustration of some system.
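The two penalty mechanisms can be written down in a few lines; the following Python sketch is illustrative only, and the reservation level and the numeric constants are assumptions rather than values from the protocol:

def discounted_utility(u, t, delta=0.95):
    # Multiplicative discounting: f_t(u) = u * delta**t.
    return u * delta ** t

def accept_with_discount(u, t, reservation, delta=0.95):
    # The party evaluates the proposal with its utility function
    # discounted by the factor, then compares to its reservation level.
    return discounted_utility(u, t, delta) >= reservation

def accept_with_threshold(u, t, rho0=0.8, step=0.05):
    # Time-dependent threshold: reduced at each period, so later
    # proposals become easier to accept.
    return u >= max(0.0, rho0 - step * t)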
We are in a cooperative context, and revealing one's opinion on what can be improved is not prohibited; on the contrary, it is useful and recommended here, seeing that it helps in converging on an agreement. Therefore, when one of the parties refuses an offer, some information will be communicated. In order to facilitate and speed up the negotiation, we introduce a mediator. This specific entity is in charge of making the proposals to the other parties in the system by taking into account their public constraints (e.g. their availabilities) and the recommendations they make. This mediator can also be considered the representative of the general interest we can have in some applications: in the crisis management problem, the physician will be the mediator and will also have some additional information to consider when making an offer (e.g. traffic state, transport mode and time). Each party in a negotiation N, a negotiator, can also be a mediator of another negotiation N', this party becoming the representative of N' in the negotiation N, as illustrated by fig. 1, which can also help in reducing the communication time. 6. AGENTIFICATION How the problem is transposed into a MAS problem is a very important aspect when designing such a system. The agentification has an influence upon the system's efficiency in solving the problem. Therefore, in this section, we describe the elements and constraints taken into account during the modelling phase and for the model itself. For this negotiation application, the modelling is quite natural when one observes the negotiation protocol motivations and main properties. First of all, it seems obvious that there should be one agent for each player of our multilateral multi-issue negotiation protocol. The agents have the involved parties' information and preferences. These agents are: • Autonomous: they decide for themselves what, when and under what conditions actions should be performed; • Rational: they have a means-ends competence to fit their decisions to their knowledge, preferences and goals; • Self-interested: they have their own interests, which may conflict with the interests of other agents. Moreover, their preferences are modelled, and a proposal evaluated and analysed, using MYRIAD. Each agent has private information and can access public information as knowledge. In fact, there are two types of agents: the mediator type for the agents corresponding to the mediator of our negotiation protocol, the delegated physician in our application, and the negotiator type for the agents corresponding to the other parties, the hospitals.
The main behaviours that an agent of type mediator needs to negotiate in our protocol are the following: • convert-improvements: converts the information given by the other agents involved in the negotiation about the improvements to be made into constraints on the next proposal; • convert-no-decrease: converts the information given by the other agents involved in the negotiation about the points that should not be changed into constraints on the next proposal; • construct-proposal: constructs a new proposal according to the constraints obtained with convert-improvements and convert-no-decrease and the agent preferences. The main behaviours that an agent of type negotiator needs to negotiate in our protocol are the following: • convert-proposal: converts a proposal into a MYRIAD option of the agent according to its preference model and its private data; • convert-improvements-wc: converts the agent recommendations for the improvement of a MYRIAD option into general information on the proposal; • convert-no-decrease-wc: converts the agent recommendations about the criteria that should not be changed in the MYRIAD option into general information on the proposal. In addition to these behaviours, there are, for the two types of agents, access behaviours to MYRIAD functionalities such as the evaluation and improvement functions: • evaluate-option: evaluates the MYRIAD option obtained using the agent behaviour convert-proposal; • improvements: gets the agent recommendations to improve a proposal from the MYRIAD option; • no-decrease: gets the agent recommendations not to change some criteria from the MYRIAD option. Of course, before running the system with such agents, we must have defined each party's preference model in MYRIAD. This model has to be part of the agent so that it can be used to make the assessments and to retrieve the improvements. In addition to these behaviours, the communication acts between the agents are as follows (Figure 2 shows the protocol diagram in AUML, where m is the number of negotiator agents and l is the number of agents refusing the current proposal). 1. Mediator agent communication acts: (a) propose: sends a message containing a proposal to all negotiator agents; (b) inform: sends a message to all negotiator agents to inform them that an agreement has been reached, containing the consensus outcome. 2. Negotiator agent communication acts: (a) accept-proposal: sends a message to the mediator agent containing the agent recommendations about the criteria that should not be changed, obtained with convert-no-decrease-wc; (b) reject-proposal: sends a message to the mediator agent containing the agent recommendations to improve the proposal, obtained with convert-improvements-wc. Such agents are interchangeable in case of failure, since they all have the same properties and represent a user through his preference model, which depends not on the agent but on the model defined in MYRIAD. When the issues and the decision criteria are different from each other, the information about criteria improvement has to be pre-processed to give some instructions on the directions to take regarding the negotiated issues. It is the same for the evaluation of a proposal: each agent has to convert the information about the issues to update its private information and to obtain the values of each attribute of the decision criteria.
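A compact Python sketch of one negotiator round ties these behaviours together; the MYRIAD-side calls (evaluate, improvements, no_decrease) are stand-ins for the real tool's functionality, and the threshold schedule is an assumption of this sketch:

class Negotiator:
    def __init__(self, model, threshold):
        self.model = model          # stand-in for the agent's MYRIAD preference model
        self.threshold = threshold  # initial acceptance level

    def rho(self, t, step=0.05):
        # Time-dependent threshold, reduced after each rejected period.
        return max(0.0, self.threshold - step * t)

    def convert_proposal(self, proposal):
        # Application-specific: map the agent's share of the issues to
        # criteria values (identity here for illustration).
        return proposal

    def handle_proposal(self, proposal, t):
        option = self.convert_proposal(proposal)      # convert-proposal
        score = self.model.evaluate(option)           # evaluate-option
        if score >= self.rho(t):
            # accept-proposal: report the criteria that must not be degraded
            return ("accept", self.model.no_decrease(option))
        # reject-proposal: report the criteria whose improvement would help
        return ("reject", self.model.improvements(option))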
7. OUR PROTOCOL Formally, we consider negotiations where a set of players A = {1, 2, ..., m} and a player a are negotiating over a set Q of size q. The player a is the protocol mediator, the mediator agent of the agentification. The utility/preference function of a player k ∈ A ∪ {a} is Uk, defined using MYRIAD, as presented in Section 4, with a set Nk of criteria, Xk the set of options, and so on. An offer is a vector P = (P1, P2, ..., Pm), a partition of Q, in which Pk is player k's share of Q. We have P ∈ 𝒫, where 𝒫 is the set of admissible proposals, a finite set. Note that 𝒫 is determined using all players' general constraints on the proposals and Q. Moreover, let P˜ denote a particular proposal defined as a's preferred proposal. We also use the following notation: δk is the threshold decrease factor of player k, Φk : Pk → Xk is player k's function to convert a proposal into an option, and Ψk is the function indicating which points of P have to be improved, with Ψ̄k its dual function, indicating on which points no improvement is necessary. Ψ̄k is obtained using the dual of the indicator ωC(Hk, x) introduced in Section 4. In period t of our consensus procedure, player a proposes an agreement P. All players k ∈ A respond to a by accepting or rejecting P. The responses are made simultaneously. If all players k ∈ A accept the offer, the game ends. If any player k rejects P, then the next period t + 1 begins: player a makes another proposal P' by taking into account the information provided by the players, and the ones that have rejected P apply a penalty. Therefore, our basic negotiation protocol is as follows. Protocol P1. • At the beginning, we set period t = 0. • a makes a proposal P ∈ 𝒫 that has not been proposed before. • Wait until all players of A give their opinion, Yes or No, to the player a. If all players agree on P, the latter is chosen. Otherwise, t is incremented and we go back to the previous point. • If there is no offer left from 𝒫, the default offer P˜ is chosen. • The utility of players regarding a given offer decreases over time. More precisely, the utility of player k ∈ A at period t regarding offer P is Uk(Φk(Pk), t) = ft(Uk(Φk(Pk))), where one can take for instance ft(x) = x · (δk)^t or ft(x) = x − δk · t as the penalty function. Lemma 1. Protocol P1 has at least one subgame perfect equilibrium. (A subgame perfect equilibrium is an equilibrium such that the players' strategies constitute a Nash equilibrium in every subgame of the original game [18, 16]. A Nash equilibrium is a set of strategies, one for each player, such that no player has an incentive to unilaterally change his/her action [15].) Proof: Protocol P1 is first transformed into a game in extensive form. To this end, one shall specify the order in which the responders A react to the offer P of a. However, the order in which the players answer has no influence on the course of the game and in particular on their personal utility. Hence protocol P1 is strictly equivalent to a game in extensive form, considering any order of the players A. This game is clearly finite since 𝒫 is finite and each offer can only be proposed once. Finally, P1 corresponds to a game with perfect information. We end the proof by using a classical result stating that any finite game in extensive form with perfect information has at least one subgame perfect equilibrium (see e.g. [16]). Rational players (in the sense of von Neumann and Morgenstern) involved in protocol P1 will necessarily come up with a subgame perfect equilibrium.
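Operationally, protocol P1 is a simple loop. The sketch below is one possible Python rendering, where offers is the list of admissible proposals in the order the mediator chooses to propose them and each player exposes a respond(offer, t) predicate; both are assumptions of this sketch:

def run_protocol_p1(offers, players, default_offer):
    """Propose each admissible offer at most once; stop on unanimous
    acceptance, otherwise fall back to the mediator's default offer."""
    t = 0
    for offer in offers:
        votes = [player.respond(offer, t) for player in players]  # simultaneous
        if all(votes):
            return offer       # unanimous agreement: the game ends
        t += 1                 # rejection: next period, penalties grow
    return default_offer       # no admissible offer left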
Consider, for instance, a case with two responding players and three admissible offers P1, P2 and P3, where a prefers P1 to P2 to P3, P1 is the worst offer for both players 1 and 2, and P2 is not very attractive for player 2. It is easy to see that there is one single subgame perfect equilibrium for protocol P1 corresponding to these values. This equilibrium consists of the following choices: first, a proposes P3; player 1 rejects this offer; a then proposes P2 and both players 1 and 2 accept, as otherwise they are threatened with receiving the offer P1, which is worse for them. Finally, offer P2 is chosen. Option P1 is the best one for a, but the two other players vetoed it. It is interesting to point out that, even though a prefers P2 to P3, offer P3 is proposed first, and this makes P2 accepted. If a proposes P2 first, then the subgame perfect equilibrium in this situation is P3. To sum up, the less preferred options have to be proposed first in order to finally obtain the best one. But this entails a waste of time. Analysing the previous example, one sees that the game outcome at the equilibrium is P2, which is not very attractive for player 2. Option P3 seems more balanced, since no player judges it badly. It could be seen as a better solution, as a consensus among the agents. In order to introduce this notion of balance in the protocol, we introduce a condition under which a player is obliged to accept the proposal, reducing the autonomy of the agents but increasing rationality and cooperation. More precisely, if the utility of a player is larger than a given threshold, then acceptance is required. The threshold decreases over time, so that players have to make more and more concessions. Therefore, the protocol becomes as follows. Protocol P2. • At the beginning, we set period t = 0. • a makes a proposal P ∈ 𝒫 that has not been proposed before. • Wait until all players of A give their opinion, Yes or No, to the player a. A player k must accept the offer if Uk(Φk(Pk)) ≥ ρk(t), where ρk(t) tends to zero when t grows; moreover, there exists T such that for all t ≥ T, ρk(t) = 0. If all players agree on P, the latter is chosen. Otherwise, t is incremented and we go back to the previous point. • If there is no offer left from 𝒫, the default offer P˜ is chosen. One can show exactly as in Lemma 1 that protocol P2 has at least one subgame perfect equilibrium. We expect that protocol P2 provides a solution not too far from the most balanced option, so it favours fairness among the players. Therefore, our cooperation-based multilateral multi-issue protocol is the following. Protocol P. • At the beginning, we set period t = 0. • a makes a proposal Pᵗ ∈ 𝒫 that has not been proposed before, considering the recommendations Ψk and Ψ̄k returned for the previous proposal by all players k ∈ A. • Wait until all players of A give their opinion, (Yes, Ψ̄k(Pᵗ)) or (No, Ψk(Pᵗ)), to the player a. A player k must accept the offer if Uk(Φk(Pk)) ≥ ρk(t), where ρk(t) tends to zero when t grows; moreover, there exists T such that for all t ≥ T, ρk(t) = 0. If all players agree on P, the latter is chosen. Otherwise, t is incremented and we go back to the previous point. • If there is no offer left from 𝒫, the default offer P˜ is chosen.
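The forced-acceptance rule is the key difference from P1. A minimal sketch follows; the linear schedule for ρk and the horizon T are assumptions, since the text only requires ρk(t) → 0 with ρk(t) = 0 for t ≥ T:

def rho(initial_threshold, t, T=20):
    # Decreasing acceptance threshold: positive before the horizon T,
    # exactly zero afterwards.
    return initial_threshold * max(0.0, 1.0 - t / T)

def must_accept(utility, initial_threshold, t, T=20):
    # A player is obliged to accept once its evaluation clears rho_k(t);
    # below the threshold it remains free to follow its own strategy.
    return utility >= rho(initial_threshold, t, T)

Since evaluations live in [0, 1], every player is compelled to accept once t ≥ T, so the procedure cannot run forever.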
8. EXPERIMENTS We developed a MAS using the widely used JADE agent platform [1]. This MAS is designed to be as general as possible (e.g. a general framework to specialise according to the application) and enables us to make some preliminary experiments. The experiments aim at verifying that our approach gives solutions as close as possible to the Maximin solution, in a small number of rounds and hopefully in a short time, since our context is highly cooperative. We defined the two types of agents and their behaviours as introduced in Section 6. The agents and their behaviours correspond to the main classes of our prototype: NegotiatorAgent and NegotiatorBehaviour for the negotiator agents, and MediatorAgent and MediatorBehaviour for the mediator agent. These classes extend JADE classes and integrate MYRIAD into the agents, reducing the amount of communication in the system. Some functionalities have to be implemented according to the application by extending these classes. In particular, all conversion parts of the agents have to be specified according to the application, since to convert a proposal into decision criteria we need to know this model and the correlations between the proposals and this model. First, to illustrate our protocol, we present a simple example of our dispatch problem. In this example, we have three hospitals, H1, H2 and H3. Each hospital can receive victims having a particular pathology, in such a way that H1 can receive patients with the pathology burn, surgery or orthopedic, H2 can receive patients with the pathology surgery, orthopedic or cardiology, and H3 can receive patients with the pathology burn or cardiology. All the hospitals have similar decision criteria reflecting their preferences on the level of congestion they can face for the overall hospital and the different services available, as briefly explained for hospital H1 hereafter. For hospital H1, the preference model (fig. 3) is composed of five criteria. These criteria correspond to the preferences on the pathologies the hospital can treat. Figure 3: The H1 preference model in MYRIAD. In the case of the pathology burn, the corresponding criterion, also named burn as shown in fig. 3, represents the preferences of H1 according to the value of Cburn, which is the current capacity of burn. Therefore, the utility function of this criterion represents a preference such that the more patients of this pathology are in the hospital, the less the hospital can satisfy them, given an initial capacity. In addition to reflecting this kind of viewpoint, the aggregation function as defined in MYRIAD introduces a veto on the criteria burn, surgery, orthopedic and EReceipt, where EReceipt is the criterion for the preferences about the capacity to receive a number of patients at the same time. In this simplified example, the physician has no particular preferences on the dispatch, and the mediator agent chooses a proposal randomly in a subset of the set of admissibility. This subset has to satisfy as much as possible the recommendations made by the hospitals. To solve this problem, for this example, we decided to solve a linear problem with the availability constraints and the recommendations as linear constraints on the dispatch values. The set of admissibility is then obtained by solving this linear problem with Prolog. Moreover, only the recommendations on how to improve a proposal are taken into account. The problem to solve is then to dispatch to hospitals H1, H2 and H3 the set of victims composed of 5 victims with the pathology burn, 10 with surgery, 3 with orthopedic and 7 with cardiology. The availabilities of the hospitals are as presented in the following table. We obtain a multi-agent system with the mediator agent and three agents of type negotiator for the three hospitals in the problem. The hospitals' thresholds are fixed approximately at the level where an evaluation is considered good.
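Since the availability table itself did not survive in this version of the text, the following Python sketch uses placeholder capacities (consistent with, but not taken from, the example run below) to show how the admissible dispatches could be enumerated by brute force instead of a Prolog linear program:

from itertools import product

VICTIMS = {"burn": 5, "surgery": 10, "orthopedic": 3, "cardiology": 7}
# Placeholder capacities per hospital and pathology; a missing entry
# means the hospital cannot treat that pathology.
CAPACITY = {
    "H1": {"burn": 3, "surgery": 8, "orthopedic": 3},
    "H2": {"surgery": 6, "orthopedic": 3, "cardiology": 6},
    "H3": {"burn": 5, "cardiology": 4},
}
HOSPITALS = list(CAPACITY)

def splits(total, caps):
    # All ways to split `total` victims over the hospitals within capacities.
    if len(caps) == 1:
        return [(total,)] if total <= caps[0] else []
    return [(x,) + rest
            for x in range(min(total, caps[0]) + 1)
            for rest in splits(total - x, caps[1:])]

def admissible_dispatches():
    per_pathology = []
    for pathology, n in VICTIMS.items():
        caps = [CAPACITY[h].get(pathology, 0) for h in HOSPITALS]
        per_pathology.append([(pathology, s) for s in splits(n, caps)])
    for combo in product(*per_pathology):
        yield {pathology: dict(zip(HOSPITALS, s)) for pathology, s in combo}

Recommendations such as reject_proposal([burn, surgery]) can then be honoured by filtering this set before the mediator draws its next proposal.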
To start, the negotiator agents send their availabilities. The mediator agent makes a proposal chosen randomly in the admissible set obtained with these availabilities as linear constraints. This proposal is the vector P0 = [[H1, burn, 3], [H1, surgery, 8], [H1, orthopedic, 0], [H2, surgery, 2], [H2, orthopedic, 3], [H2, cardiology, 6], [H3, burn, 2], [H3, cardiology, 1]], and the mediator sends propose(P0) to H1, H2 and H3 for approval. Each negotiator agent evaluates this proposal and answers back by accepting or rejecting P0: • Agent H1 rejects this offer, since its evaluation is very far from the threshold (0.29, a bad score), and gives a recommendation to improve burn and surgery by sending the message reject_proposal([burn, surgery]); • Agent H2 accepts this offer by sending the message accept_proposal(), the proposal evaluation being good; • Agent H3 accepts P0 by sending the message accept_proposal(), the proposal evaluation being good. Just with the recommendations provided by agent H1, the mediator is able to make a new proposal by restricting the values of burn and surgery. The new proposal obtained is then P1 = [[H1, burn, 0], [H1, surgery, 8], [H1, orthopedic, 1], [H2, surgery, 2], [H2, orthopedic, 2], [H2, cardiology, 6], [H3, burn, 5], [H3, cardiology, 1]]. The mediator sends propose(P1) to the negotiator agents. H1, H2 and H3 answer back by sending the message accept_proposal(), P1 being evaluated with a high enough score to be acceptable, and also considered a good proposal when using the explanation function of MYRIAD. An agreement is reached with P1. Note that the evaluation of P1 by H3 has decreased in comparison with P0, but not enough for it to be rejected, and that this solution is the Pareto one, P∗. Other examples have been tested with the same settings: issues in ℕ, three negotiator agents and the same mediator agent, with no preference model but selecting the proposal randomly. We obtained solutions either equal or close to the Maximin solution, the standard deviation of the distance being less than 0.0829, the evaluations not far from the ones obtained with P∗, and with fewer than seven proposals made. This shows that we are able to solve this multi-issue multilateral negotiation problem in a simple and efficient way, with solutions close to the Pareto solution. 9. CONCLUSION AND FUTURE WORK This paper presents a new protocol to address multilateral multi-issue negotiation in a cooperative context. The first main contribution is that we take into account complex interdependencies between multiple issues through a rich preference model. This contribution is reinforced by the use of multi-issue negotiation in a multilateral context. Our second contribution is the use of sharp recommendations in the protocol to help in accelerating the search for a consensus between the cooperative agents and in finding an optimal solution. We have also shown that the protocol has subgame perfect equilibria and that these equilibria converge to the usual Maximin solution. Moreover, we tested this protocol in a crisis management context where the negotiation aims at deciding where to evacuate a whole set of injured people among predefined hospitals. We have developed a first MAS, in particular integrating MYRIAD, to test this protocol, in order to learn more about its efficiency in terms of solution quality and speed in finding a consensus.
This prototype enabled us to solve some examples with our approach, and the results we obtained are encouraging, since we quickly obtained good agreements, close to the Pareto solution, given the initial constraints of the problem: the availabilities. We still have to improve our MAS by taking into account the two types of recommendations and by adding a preference model to the mediator of our system. Moreover, a comparative study has to be done in order to evaluate the performance of our framework against existing ones and against some variations on the protocol. 10. ACKNOWLEDGEMENT This work is partly funded by the ICIS research project under the Dutch BSIK Program (BSIK 03024).
I-48
Normative System Games
We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multi-agent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.
[ "norm system game", "norm system", "game", "multipl goal of increas prioriti", "goal", "comput complex", "complex", "game theoret properti", "kripk structur", "comput tree logic", "logic", "ordin util", "nash implement", "social law", "multi-agent system", "desir object", "constraint", "decis make" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "U", "M" ]
Normative System Games Thomas Ågotnes Dept of Computer Engineering Bergen University College PB. 2030, N-5020 Bergen Norway tag@hib.no Wiebe van der Hoek Dept of Computer Science University of Liverpool Liverpool L69 7ZF UK wiebe@csc.liv.ac.uk Michael Wooldridge Dept of Computer Science University of Liverpool Liverpool L69 7ZF UK mjw@csc.liv.ac.uk ABSTRACT We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems; I.2.4 [Knowledge representation formalisms and methods] General Terms Theory 1. INTRODUCTION Normative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1]. Although the various approaches to normative systems proposed in the literature differ on technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective will emerge. The idea of using social laws to coordinate multi-agent systems was proposed by Shoham and Tennenholtz [13, 14]; their approach was extended by van der Hoek et al. to include the idea of specifying a desirable global objective for a social law as a logical formula, with the idea being that the normative system would be regarded as successful if, after implementing it (i.e., after eliminating all forbidden actions), the objective formula was guaranteed to be satisfied in the system [15]. However, this model did not take into account the preferences of individual agents, and hence neglected to account for possible strategic behaviour by agents when deciding whether to comply with the normative system or not. This model of normative systems was further extended by attributing to each agent a single goal in [16]. However, this model was still too impoverished to capture the kinds of decision making that take place when an agent decides whether or not to comply with a social law. In reality, strategic considerations come into play: an agent takes into account not just whether the normative system would be beneficial for itself, but also whether other agents will rationally choose to participate.
In this paper, we develop a model of normative systems in which agents are assumed to have multiple goals, of increasing priority. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures [8]: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We thus provide a very natural bridge between logical structures and languages and the techniques and concepts of game theory, which have proved to be very powerful for analysing social contract-style scenarios such as normative systems [3, 4]. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete. 2. KRIPKE STRUCTURES AND CTL We use Kripke structures as our basic semantic model for multiagent systems [8]. A Kripke structure is essentially a directed graph, with the vertex set S corresponding to possible states of the system being modelled, and the relation R ⊆ S × S capturing the possible transitions of the system; intuitively, these transitions are caused by agents in the system performing actions, although we do not include such actions in our semantic model (see, e.g., [13, 2, 15] for related models which include actions as first class citizens). We let S0 denote the set of possible initial states of the system. Our model is intended to correspond to the well-known interleaved concurrency model from the reactive systems literature: thus an arc corresponds to the execution of an atomic action by one of the processes in the system, which we call agents. It is important to note that, in contrast to such models as [2, 15], we are therefore here not modelling synchronous action. This assumption is not in fact essential for our analysis, but it greatly simplifies the presentation. However, we find it convenient to include within our model the agents that cause transitions. We therefore assume a set A of agents, and we label each transition in R with the agent that causes the transition via a function α : R → A. Finally, we use a vocabulary Φ = {p, q, ...} of Boolean variables to express the properties of individual states S: we use a function V : S → 2^Φ to label each state with the Boolean variables true (or satisfied) in that state. Collecting these components together, an agent-labelled Kripke structure (over Φ) is a 6-tuple K = ⟨S, S0, R, A, α, V⟩, where: • S is a finite, non-empty set of states; • S0 ⊆ S (S0 ≠ ∅) is the set of initial states; • R ⊆ S × S is a total binary relation on S, which we refer to as the transition relation (in the branching time temporal logic literature, a relation R ⊆ S × S is said to be total iff ∀s ∃s′ : (s, s′) ∈ R; the term total relation is sometimes used instead for relations R ⊆ S × S such that for every pair of elements s, s′ ∈ S we have either (s, s′) ∈ R or (s′, s) ∈ R, but we are not using the term in this way here; it is also worth noting that for some domains, other constraints may be more appropriate than simple totality, for example the agent totality requirement that in every state, every agent has at least one possible transition available: ∀s ∀i ∈ A ∃s′ : (s, s′) ∈ R and α(s, s′) = i); • A = {1, ..., n} is a set of agents; • α : R → A labels each transition in R with an agent; and • V : S → 2^Φ labels each state with the set of propositional variables true in that state. In the interests of brevity, we shall hereafter refer to an agent-labelled Kripke structure simply as a Kripke structure.
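As a concrete illustration, here is a minimal Python encoding of such a structure, instantiated with the two-state resource control scenario of Example 1 below; the agent labelling of the four arcs follows the natural reading of Figure 1 (the agent in possession either keeps or gives away the resource) and is otherwise an assumption of this sketch:

from dataclasses import dataclass

@dataclass
class Kripke:
    states: set       # S
    initial: set      # S0, a non-empty subset of S
    arcs: dict        # alpha: maps each transition (s, s') in R to its agent
    agents: set       # A
    valuation: dict   # V: maps each state to the set of true propositions

    def successors(self, s):
        return {t for (u, t) in self.arcs if u == s}

    def is_total(self):
        # Totality: every state has at least one outgoing transition.
        return all(self.successors(s) for s in self.states)

# The resource control example: in s, agent 1 holds the resource and may
# keep it (s, s) or give it away (s, t); symmetrically for agent 2 in t.
K = Kripke(
    states={"s", "t"},
    initial={"s", "t"},
    arcs={("s", "s"): 1, ("s", "t"): 1, ("t", "t"): 2, ("t", "s"): 2},
    agents={1, 2},
    valuation={"s": {"p1"}, "t": {"p2"}},
)
assert K.is_total()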
A path over a transition relation R is an infinite sequence of states π = s0, s1, ... which must satisfy the property that ∀u ∈ ℕ : (su, su+1) ∈ R. If u ∈ ℕ, then we denote by π[u] the component indexed by u in π (thus π[0] denotes the first element, π[1] the second, and so on). A path π such that π[0] = s is an s-path. Let ΠR(s) denote the set of s-paths over R; since it will usually be clear from context, we often omit reference to R, and simply write Π(s). We will sometimes refer to and think of an s-path as a possible computation, or system evolution, from s. EXAMPLE 1. Our running example is of a system with a single non-sharable resource, which is desired by two agents. Consider the Kripke structure depicted in Figure 1. We have two states, s and t, and two corresponding Boolean variables p1 and p2, which are mutually exclusive. Figure 1: The resource control running example. Think of pi as meaning agent i currently has control over the resource. Each agent has two possible actions, when in possession of the resource: either give it away, or keep it. Obviously there are infinitely many different s-paths and t-paths. Let us say that our set of initial states S0 equals {s, t}, i.e., we don't make any assumptions about who initially has control over the resource. 2.1 CTL We now define Computation Tree Logic (CTL), a branching time temporal logic intended for representing the properties of Kripke structures [8]. Note that since CTL is well known and widely documented in the literature, our presentation, though complete, will be somewhat terse. We will use CTL to express agents' goals. The syntax of CTL is defined by the following grammar: ϕ ::= ⊤ | p | ¬ϕ | ϕ ∨ ϕ | E◯ϕ | E(ϕ U ϕ) | A◯ϕ | A(ϕ U ϕ) where p ∈ Φ. We denote the set of CTL formulae over Φ by LΦ; since Φ is understood, we usually omit reference to it. The semantics of CTL are given with respect to the satisfaction relation |=, which holds between pairs of the form K, s (where K is a Kripke structure and s is a state in K) and formulae of the language. The satisfaction relation is defined as follows: K, s |= ⊤; K, s |= p iff p ∈ V(s) (where p ∈ Φ); K, s |= ¬ϕ iff not K, s |= ϕ; K, s |= ϕ ∨ ψ iff K, s |= ϕ or K, s |= ψ; K, s |= A◯ϕ iff ∀π ∈ Π(s) : K, π[1] |= ϕ; K, s |= E◯ϕ iff ∃π ∈ Π(s) : K, π[1] |= ϕ; K, s |= A(ϕ U ψ) iff ∀π ∈ Π(s), ∃u ∈ ℕ, s.t. K, π[u] |= ψ and ∀v, (0 ≤ v < u) : K, π[v] |= ϕ; K, s |= E(ϕ U ψ) iff ∃π ∈ Π(s), ∃u ∈ ℕ, s.t. K, π[u] |= ψ and ∀v, (0 ≤ v < u) : K, π[v] |= ϕ. The remaining classical logic connectives (∧, →, ↔) are assumed to be defined as abbreviations in terms of ¬, ∨, in the conventional manner. The remaining CTL temporal operators are defined: A♦ϕ ≡ A(⊤ U ϕ); E♦ϕ ≡ E(⊤ U ϕ); A□ϕ ≡ ¬E♦¬ϕ; E□ϕ ≡ ¬A♦¬ϕ. We say ϕ is satisfiable if K, s |= ϕ for some Kripke structure K and state s in K; ϕ is valid if K, s |= ϕ for all Kripke structures K and states s in K.
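For finite Kripke structures, the satisfaction sets of these operators can be computed by the standard fixpoint labelling algorithm. A compact Python sketch follows, where R is a set of transition pairs (for example, set(K.arcs) from the earlier fragment); the A-operators are obtainable from the dualities above:

def sat_EX(R, good):
    # E-next: states with at least one successor in `good`.
    return {s for (s, t) in R if t in good}

def sat_EU(R, left, right):
    # E(left U right): least fixpoint of  Z = right | (left & EX Z).
    result = set(right)
    while True:
        new = {s for (s, t) in R if t in result and s in left}
        if new <= result:
            return result
        result |= new

def sat_EG(R, good):
    # E-always good: greatest fixpoint of  Z = good & EX Z.
    result = set(good)
    while True:
        keep = {s for s in result if any(t in result for (u, t) in R if u == s)}
        if keep == result:
            return result
        result = keep

For instance, with R from the running example, sat_EG(R, {"s"}) returns {"s"}, witnessing that E□p1 holds at s.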
The problem of checking whether K, s |= ϕ for given K, s, ϕ (model checking) can be done in deterministic polynomial time, while checking whether a given ϕ is satisfiable or whether ϕ is valid is EXPTIME-complete [8]. We write K |= ϕ if K, s0 |= ϕ for all s0 ∈ S0, and |= ϕ if K |= ϕ for all K. 3. NORMATIVE SYSTEMS For our purposes, a normative system is simply a set of constraints on the behaviour of agents in a system [1]. More precisely, a normative system defines, for every possible system transition, whether or not that transition is considered to be legal. Different normative systems may differ on whether or not a transition is legal. Formally, a normative system η (w.r.t. a Kripke structure K = ⟨S, S0, R, A, α, V⟩) is simply a subset of R, such that R \ η is a total relation. The requirement that R \ η is total is a reasonableness constraint: it prevents normative systems which lead to states with no successor. Let N(R) = {η : (η ⊆ R) & (R \ η is total)} be the set of normative systems over R. The intended interpretation of a normative system η is that (s, s′) ∈ η means transition (s, s′) is forbidden in the context of η; hence R \ η denotes the legal transitions of η. Since it is assumed η is reasonable, we are guaranteed that a legal outward transition exists for every state. We denote the empty normative system by η∅, so η∅ = ∅. Note that the empty normative system η∅ is reasonable with respect to any transition relation R. The effect of implementing a normative system on a Kripke structure is to eliminate from it all transitions that are forbidden according to this normative system (see [15, 1]). If K is a Kripke structure, and η is a normative system over K, then K † η denotes the Kripke structure obtained from K by deleting transitions forbidden in η. Formally, if K = ⟨S, S0, R, A, α, V⟩ and η ∈ N(R), then let K † η = K′ be the Kripke structure K′ = ⟨S′, S0′, R′, A′, α′, V′⟩ where: • S′ = S, S0′ = S0, A′ = A, and V′ = V; • R′ = R \ η; and • α′ is the restriction of α to R′: α′(s, s′) = α(s, s′) if (s, s′) ∈ R′, and is undefined otherwise. Notice that for all K, we have K † η∅ = K. EXAMPLE 1. (continued) When thinking in terms of fairness, it seems natural to consider normative systems η that contain (s, s) or (t, t). A normative system with (s, t) would not be fair, in the sense that A♦A□¬p1 ∨ A♦A□¬p2 holds: in all paths, from some moment on, one agent will have control forever. Let us, for later reference, fix η1 = {(s, s)}, η2 = {(t, t)}, and η3 = {(s, s), (t, t)}. Later, we will address the issue of whether or not agents should rationally choose to comply with a particular normative system. In this context, it is useful to define operators on normative systems which correspond to groups of agents defecting from the normative system. Formally, let K = ⟨S, S0, R, A, α, V⟩ be a Kripke structure, let C ⊆ A be a set of agents over K, and let η be a normative system over K. Then: • η↾C denotes the normative system that is the same as η except that it only contains the arcs of η that correspond to the actions of agents in C. We call η↾C the restriction of η to C, and it is defined as: η↾C = {(s, s′) : (s, s′) ∈ η & α(s, s′) ∈ C}. Thus K † (η↾C) is the Kripke structure that results if only the agents in C choose to comply with the normative system. • η⇂C denotes the normative system that is the same as η except that it only contains the arcs of η that do not correspond to actions of agents in C. We call η⇂C the exclusion of C from η, and it is defined as: η⇂C = {(s, s′) : (s, s′) ∈ η & α(s, s′) ∉ C}. Thus K † (η⇂C) is the Kripke structure that results if only the agents in C choose not to comply with the normative system (i.e., the only ones who comply are those in A \ C). Note that we have η⇂C = η↾(A \ C) and η↾C = η⇂(A \ C). EXAMPLE 1. (Continued) We have η1↾{1} = η1 = {(s, s)}, while η1⇂{1} = η∅ = η1↾{2}. Similarly, we have η3↾{1} = {(s, s)} and η3⇂{1} = {(t, t)}.
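Continuing the Python sketch from Section 2, implementing a normative system and the two defection operators are simple filters over the labelled arcs; the names implement, restrict and exclude are just this sketch's naming, and the asserts replay the example values above:

def implement(K, eta):
    # K † eta: delete the transitions forbidden by the normative system.
    legal = {arc: agent for arc, agent in K.arcs.items() if arc not in eta}
    K2 = Kripke(K.states, K.initial, legal, K.agents, K.valuation)
    assert K2.is_total(), "eta is not reasonable: a state lost all successors"
    return K2

def restrict(eta, alpha, C):
    # Restriction of eta to C: forbidden arcs caused by agents in C.
    return {arc for arc in eta if alpha[arc] in C}

def exclude(eta, alpha, C):
    # Exclusion of C from eta: forbidden arcs caused by agents outside C.
    return {arc for arc in eta if alpha[arc] not in C}

eta1, eta3 = {("s", "s")}, {("s", "s"), ("t", "t")}
assert restrict(eta3, K.arcs, {1}) == {("s", "s")}
assert exclude(eta3, K.arcs, {1}) == {("t", "t")}
assert exclude(eta1, K.arcs, {1}) == restrict(eta1, K.arcs, {2}) == set()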
4. GOALS AND UTILITIES Next, we want to be able to capture the goals that agents have, as these will drive an agent's strategic considerations - particularly, as we will see, considerations about whether or not to comply with a normative system. We will model an agent's goals as a prioritised list of CTL formulae, representing increasingly desired properties that the agent wishes to hold. The intended interpretation of such a goal hierarchy γi for agent i ∈ A is that the further up the hierarchy a goal is, the more it is desired by i. Note that we assume that if an agent can achieve a goal at a particular level in its goal hierarchy, then it is unconcerned about goals lower down the hierarchy. Formally, a goal hierarchy, γ (over a Kripke structure K), is a finite, non-empty sequence of CTL formulae γ = (ϕ0, ϕ1, ..., ϕk) in which, by convention, ϕ0 = ⊤. We use a natural number indexing notation to extract the elements of a goal hierarchy, so if γ = (ϕ0, ϕ1, ..., ϕk) then γ[0] = ϕ0, γ[1] = ϕ1, and so on. We denote the largest index of any element in γ by |γ|. A particular Kripke structure K is said to satisfy a goal at index x in goal hierarchy γ if K |= γ[x], i.e., if γ[x] is satisfied in all initial states S0 of K. An obvious potential property of goal hierarchies is monotonicity: where goals at higher levels in the hierarchy logically imply those at lower levels in the hierarchy. Formally, a goal hierarchy γ is monotonic if for all x ∈ {1, ..., |γ|} ⊆ ℕ, we have |= γ[x] → γ[x − 1]. The simplest type of monotonic goal hierarchy is where γ[x + 1] = γ[x] ∧ ψx+1 for some ψx+1, so at each successive level of the hierarchy, we add new constraints to the goal of the previous level. Although this is a natural property of many goal hierarchies, it is not a property we demand of all goal hierarchies. EXAMPLE 1. (continued) Suppose the agents have similar, but opposing, goals: each agent i wants to keep the resource as often and as long as possible for himself. Define each agent's goal hierarchy as: γi = (ϕi0 = ⊤, ϕi1 = E♦pi, ϕi2 = E□E♦pi, ϕi3 = E♦E□pi, ϕi4 = A□E♦pi, ϕi5 = E♦A□pi, ϕi6 = A□A♦pi, ϕi7 = A□(A♦pi ∧ E□pi), ϕi8 = A□pi). The most desired goal of agent i is to, in every computation, always have the resource, pi (this is expressed in ϕi8). Thanks to our reasonableness constraint, this goal implies ϕi7, which says that, no matter how the computation paths evolve, it will always be that all continuations will hit a point in which pi, and, moreover, there is a continuation in which pi always holds. Goal ϕi6 is a fairness constraint implied by it. Note that A♦pi says that every computation eventually reaches a pi state. This may mean that after pi has happened, it will never happen again. ϕi6 circumvents this: it says that, no matter where you are, there should be a future pi state. The goal ϕi5 is like the strong goal ϕi8 but it accepts that this is only achieved in some computation, eventually.
A multi-agent system collects together a Kripke structure (representing the basic properties of a system under consideration: its state space, and the possible state transitions that may occur in it), together with a goal hierarchy, one for each agent, representing the aspirations of the agents in the system. Formally, a multi-agent system, M, is an (n + 1)-tuple:

M = ⟨K, γ1, ..., γn⟩

where K is a Kripke structure, and for each agent i in K, γi is a goal hierarchy over K.

4.1 The Utility of Normative Systems

We can now define the utility of a Kripke structure for an agent. The idea is that the utility of a Kripke structure is the highest index of any goal that is guaranteed for that agent in the Kripke structure. We make this precise in the function ui(·):

ui(K) = max{j : 0 ≤ j ≤ |γi| & K |= γi[j]}

Note that using these definitions of goals and utility, it never makes sense to have a goal ϕ at index n if there is a logically weaker goal ψ at index n + k in the hierarchy: by definition of utility, the utility could never be n for any structure K.

EXAMPLE 1. (continued) Let M = ⟨K, γ1, γ2⟩ be the multi-agent system of Figure 1, with γ1 and γ2 as defined earlier in this example. Recall that we have defined S0 as {s, t}. Then, u1(K) = u2(K) = 4: goal ϕ4 is true in S0, but ϕ5 is not. To see that ϕ24 = A□E♦p2 is true in s, for instance: note that on every path it is always the case that there is a transition to t, in which p2 is true.

Notice that since for any goal hierarchy γi we have γ[0] = ⊤, then for all Kripke structures, ui(K) is well defined, with ui(K) ≥ 0.

η     δ1(K, η)   δ2(K, η)
η∅       0          0
η1       0          3
η2       3          0
η3       2          2

         C        D
C     (2, 2)   (0, 3)
D     (3, 0)   (0, 0)

Figure 2: Benefits of implementing a normative system η (left) and pay-offs for the game ΣM (right).

Note that this is an ordinal utility measure: it tells us, for any given agent, the relative utility of different Kripke structures, but utility values are not on some standard system-wide scale. The fact that ui(K1) > ui(K2) certainly means that i strictly prefers K1 over K2, but the fact that ui(K) > uj(K) does not mean that i values K more highly than j. Thus, it does not make sense to compare utility values between agents, and so, for example, some system-wide measures of utility (notably those measures that aggregate individual utilities, such as social welfare) do not make sense when applied in this setting. However, as we shall see shortly, other measures - such as Pareto efficiency - can be usefully applied.
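With such a checker in scope, the ordinal utility ui(K) becomes a one-line maximisation (again a sketch; the function name utility and the list encoding of goal hierarchies are our own assumptions):

```python
def utility(K, gamma_i):
    """Highest index of a goal in the hierarchy guaranteed in all initial states."""
    return max(j for j, goal in enumerate(gamma_i) if models(K, goal))

# Because gamma_i[0] = 'top' holds in every structure, the maximum always
# exists and utility(K, gamma_i) >= 0, matching the text. For example, the
# lowest non-trivial goal of agent 1, E<>p1, would be encoded as:
phi_1_1 = ('EU', 'top', ('prop', 'p1'))
```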
There are other representations for goals, which would allow us to define cardinal utilities. The simplest would be to specify goals γ for an agent as a finite, non-empty, one-to-one relation: γ ⊆ L × ℝ. We assume that the x values in pairs (ϕ, x) ∈ γ are specified so that x for agent i means the same as x for agent j, and so we have cardinal utility. We then define the utility for i of a Kripke structure K as ui(K) = max{x : (ϕ, x) ∈ γi & K |= ϕ}. The results of this paper in fact hold irrespective of which of these representations we actually choose; we fix upon the goal hierarchy approach in the interests of simplicity.

Our next step is to show how, in much the same way, we can lift the utility function from Kripke structures to normative systems. Suppose we are given a multi-agent system M = ⟨K, γ1, ..., γn⟩ and an associated normative system η over K. Let, for agent i, δi(K, K′) be the difference in his utility when moving from K to K′: δi(K, K′) = ui(K′) − ui(K). Then the utility of η to agent i w.r.t. K is δi(K, K † η). We will sometimes abuse notation and just write δi(K, η) for this, and refer to it as the benefit for agent i of implementing η in K. Note that this benefit can be negative. Summarising, the utility of a normative system to an agent is the difference between the utility of the Kripke structure in which the normative system was implemented and the original Kripke structure. If this value is greater than 0, then the agent would be better off if the normative system were imposed, while if it is less than 0 then the agent would be worse off if η were imposed than in the original system. We say η is individually rational for i w.r.t. K if δi(K, η) > 0, and individually rational simpliciter if η is individually rational for every agent. A social system now is a pair Σ = ⟨M, η⟩ where M is a multi-agent system, and η is a normative system over M.

EXAMPLE 1. (continued) The left-hand table in Figure 2 displays the utilities δi(K, η) of implementing η in the Kripke structure of our running example, for the normative systems η = η∅, η1, η2 and η3 introduced before. Recall that u1(K) = u2(K) = 4.

4.2 Universal and Existential Goals

Keeping in mind that a norm η restricts the possible transitions of the model under consideration, we make the following observation, borrowing from [15]. Some classes of goals are monotonic or anti-monotonic with respect to adding additional constraints to a system. Let us therefore define two fragments of the language of CTL: the universal language Lu with typical element μ, and the existential fragment Le with typical element ε:

μ ::= ⊤ | p | ¬p | μ ∨ μ | A○μ | A□μ | A(μ U μ)
ε ::= ⊤ | p | ¬p | ε ∨ ε | E○ε | E♦ε | E(ε U ε)

Let us say, for two Kripke structures K1 = ⟨S, S0, R1, A, α, V⟩ and K2 = ⟨S, S0, R2, A, α, V⟩, that K1 is a subsystem of K2 and K2 is a supersystem of K1, written K1 ⊑ K2, iff R1 ⊆ R2. Note that typically K † η ⊑ K. Then we have (cf. [15]):

THEOREM 1. Suppose K1 ⊑ K2, and s ∈ S. Then:

∀ε ∈ Le : K1, s |= ε ⇒ K2, s |= ε
∀μ ∈ Lu : K2, s |= μ ⇒ K1, s |= μ

This has the following effect on imposing a new norm:

COROLLARY 1. Let K be a structure, and η a normative system. Let γi denote a goal hierarchy for agent i.

1. Suppose agent i's utility ui(K) is n, and γi[n] ∈ Lu (i.e., γi[n] is a universal formula). Then, for any normative system η, δi(K, η) ≥ 0.

2. Suppose agent i's utility ui(K † η) is n, and γi[n] is an existential formula ε. Then, δi(K † η, K) ≥ 0.

Corollary 1's first item says that an agent whose current maximal goal in a system is a universal formula need never fear the imposition of a new norm η. The reason is that his current goal will at least remain true (in fact, a goal higher up in the hierarchy may become true). It follows from this that an agent with only universal goals can only gain from the imposition of normative systems η. The opposite is true for existential goals, according to the second item of the corollary: it can never be bad for an agent to undo a norm η. Hence, an agent with only existential goals might well fear any norm η. However, these observations implicitly assume that all agents in the system will comply with the norm. Whether they will in fact do so, of course, is a strategic decision: it partly depends on what the agent thinks that other agents will do. This motivates us to consider normative system games.
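Membership in the fragments Lu and Le defined above is purely syntactic, so it can be decided by a structural recursion over our assumed tuple encoding ('AX', 'AG', 'AU' standing for A○, A□ and A(· U ·), and dually for E). A sketch:

```python
def _atom(phi):
    return phi == 'top' or (isinstance(phi, tuple) and phi[0] == 'prop')

def in_universal(phi):
    """Syntactic membership in Lu (negation only on atoms)."""
    if _atom(phi):
        return True
    if phi[0] == 'not':
        return _atom(phi[1])
    if phi[0] == 'or':
        return in_universal(phi[1]) and in_universal(phi[2])
    if phi[0] in ('AX', 'AG'):
        return in_universal(phi[1])
    if phi[0] == 'AU':
        return in_universal(phi[1]) and in_universal(phi[2])
    return False

def in_existential(phi):
    """Syntactic membership in Le (negation only on atoms)."""
    if _atom(phi):
        return True
    if phi[0] == 'not':
        return _atom(phi[1])
    if phi[0] == 'or':
        return in_existential(phi[1]) and in_existential(phi[2])
    if phi[0] in ('EX', 'EF'):
        return in_existential(phi[1])
    if phi[0] == 'EU':
        return in_existential(phi[1]) and in_existential(phi[2])
    return False

# By Corollary 1, an agent whose current maximal goal passes in_universal
# cannot lose utility under any norm eta.
```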
5. NORMATIVE SYSTEM GAMES

We now have a principled way of talking about the utility of normative systems for agents, and so we can start to apply the technical apparatus of game theory to analyse them. Suppose we have a multi-agent system M = ⟨K, γ1, ..., γn⟩ and a normative system η over K. It is proposed to the agents in M that η should be imposed on K (typically to achieve some coordination objective). Our agent - let's say agent i - is then faced with a choice: should it comply with the strictures of the normative system, or not? Note that this reasoning takes place before the agent is "in" the system - it is a design time consideration.

We can understand the reasoning here as a game, as follows. A game in strategic normal form (cf. [11, p.11]) is a structure:

G = ⟨AG, S1, ..., Sn, U1, ..., Un⟩

where:

• AG = {1, ..., n} is a set of agents - the players of the game;
• Si is the set of strategies for each agent i ∈ AG (a strategy for an agent i is nothing else than a choice between alternative actions); and
• Ui : (S1 × ··· × Sn) → ℝ is the utility function for agent i ∈ AG, which assigns a utility to every combination of strategy choices for the agents.

Now, suppose we are given a social system Σ = ⟨M, η⟩ where M = ⟨K, γ1, ..., γn⟩. Then we can associate a game - the normative system game - GΣ with Σ, as follows. The agents AG in GΣ are as in Σ. Each agent i has just two strategies available to it:

• C - comply (cooperate) with the normative system; and
• D - do not comply with (defect from) the normative system.

If S is a tuple of strategies, one for each agent, and x ∈ {C, D}, then we denote by AG^x_S the subset of agents that play strategy x in S. Hence, for a social system Σ = ⟨M, η⟩, the normative system η↾AG^C_S only implements the restrictions for those agents that choose to cooperate in GΣ. Note that this is the same as η⇂AG^D_S: the normative system that excludes all the restrictions of agents that play D in GΣ. We then define the utility functions Ui for each i ∈ AG as:

Ui(S) = δi(K, η↾AG^C_S)
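Mechanically applying this definition reproduces the right-hand table of Figure 2. A sketch (reusing the assumed helpers implement, restrict and utility from the earlier blocks):

```python
from itertools import product

def payoffs(K, goals, eta):
    """Pay-off table of the game G_Sigma; goals maps each agent to a hierarchy."""
    agents = sorted(goals)
    base = {i: utility(K, goals[i]) for i in agents}
    table = {}
    for profile in product('CD', repeat=len(agents)):
        compliers = {i for i, x in zip(agents, profile) if x == 'C'}
        # eta restricted to the compliers = eta excluding the defectors
        K2 = implement(K, restrict(K, eta, compliers))
        table[profile] = tuple(utility(K2, goals[i]) - base[i] for i in agents)
    return table

# For eta3 = {('s','s'), ('t','t')} and the goal hierarchies of Example 1,
# this yields the matrix of Figure 2:
#   ('C','C') -> (2, 2)    ('C','D') -> (0, 3)
#   ('D','C') -> (3, 0)    ('D','D') -> (0, 0)
```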
So, for example, if SD is a collection of strategies in which every agent defects (i.e., does not comply with the norm), then

Ui(SD) = δi(K, η⇂AG^D_{SD}) = ui(K † η∅) − ui(K) = 0.

In the same way, if SC is a collection of strategies in which every agent cooperates (i.e., complies with the norm), then

Ui(SC) = δi(K, η⇂AG^D_{SC}) = δi(K, η⇂∅) = δi(K, η) = ui(K † η) − ui(K).

We can now start to investigate some properties of normative system games.

EXAMPLE 1. (continued) For our example system, we have displayed the different U values for our multi-agent system with the norm η3, i.e., {(s, s), (t, t)}, as the second table of Figure 2. For instance, the pair (0, 3) in the matrix under the entry S = ⟨C, D⟩ is obtained as follows. U1(⟨C, D⟩) = δ1(K, η3↾AG^C_{⟨C,D⟩}) = u1(K † η3↾AG^C_{⟨C,D⟩}) − u1(K). The first term of this is the utility of 1 in the system K where we implement η3 for the cooperating agent, i.e., 1, only. This means that the transitions are R \ {(s, s)}. In this system, ϕ14 = A□E♦p1 is still the highest goal for agent 1. This is the same utility for 1 as in K, and hence δ1(K, η3↾AG^C_{⟨C,D⟩}) = 0. Agent 2 of course benefits if agent 1 complies with η3 while 2 does not. His utility would be 3, since η3↾AG^C_{⟨C,D⟩} is in fact η1.

5.1 Individually Rational Normative Systems

A normative system is individually rational if every agent would fare better if the normative system were imposed than otherwise. This is a necessary, although not sufficient, condition on a norm to expect that everybody respects it. Note that η3 of our example is individually rational for both 1 and 2, although this is not a stable situation: given that the other plays C, i is better off by playing D. We can easily characterise individual rationality with respect to the corresponding game in strategic form, as follows. Let Σ = ⟨M, η⟩ be a social system. Then the following are equivalent:

1. η is individually rational in M;
2. ∀i ∈ AG, Ui(SC) > Ui(SD) in the game GΣ.

The decision problem associated with individually rational normative systems is as follows:

INDIVIDUALLY RATIONAL NORMATIVE SYSTEM (IRNS):
Given: Multi-agent system M.
Question: Does there exist an individually rational normative system for M?
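The guess-and-check structure of the NP membership argument in Theorem 2 below can be read off directly from this problem statement; made deterministic by exhaustive enumeration, it yields the following (exponential-time) decision procedure, again a sketch over our assumed helpers:

```python
from itertools import combinations

def irns(K, goals):
    """Return an individually rational normative system for M, or None.
    Deterministic stand-in for the NP guess-and-check: enumerate every
    candidate eta (subset of R), keep the reasonable ones, test rationality."""
    agents = sorted(goals)
    base = {i: utility(K, goals[i]) for i in agents}
    arcs = sorted(K.R)
    for r in range(1, len(arcs) + 1):
        for subset in combinations(arcs, r):
            eta = frozenset(subset)
            if not is_reasonable(K, eta):
                continue
            K2 = implement(K, eta)
            if all(utility(K2, goals[i]) > base[i] for i in agents):
                return eta
    return None
```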
THEOREM 2. IRNS is NP-complete, even in one-agent systems.

PROOF. For membership of NP, guess a normative system η, and verify that it is individually rational. Since η ⊆ R, we will be able to guess it in nondeterministic polynomial time. To verify that it is individually rational, we check that for all i, we have ui(K † η) > ui(K); computing K † η is just set subtraction, so can be done in polynomial time, while determining the value of ui(K) for any K can be done with a polynomial number of model checking calls, each of which requires only time polynomial in K and γ. Hence verifying that ui(K † η) > ui(K) requires only polynomial time.

For NP-hardness, we reduce SAT [12, p.77]. Given a SAT instance ϕ over Boolean variables x1, ..., xk, we produce an instance of IRNS as follows. First, we define a single agent A = {1}. For each Boolean variable xi in the SAT instance, we create two Boolean variables t(xi) and f(xi) in the IRNS instance. We then create a Kripke structure Kϕ with 2k + 1 states, as shown in Figure 3: arcs in this graph correspond to transitions in Kϕ.

[Figure 3: The Kripke structure produced in the reduction of Theorem 2; all transitions are associated with agent 1, the only initial state is s0.]

Let ϕ∗ be the result of systematically substituting for every Boolean variable xi in ϕ the CTL expression (E○t(xi)). Next, consider the following formulae:

⋀_{i=1}^{k} E○(t(xi) ∨ f(xi))    (1)

⋀_{i=1}^{k} ¬((E○t(xi)) ∧ (E○f(xi)))    (2)

We then define the goal hierarchy for agent 1 as follows:

γ1[0] = ⊤
γ1[1] = (1) ∧ (2) ∧ ϕ∗

We claim there is an individually rational normative system for the instance so constructed iff ϕ is satisfiable. First, notice that any individually rational normative system must force γ1[1] to be true, since in the original system, we do not have γ1[1]. For the ⇒ direction, if there is an individually rational normative system η, then we construct a satisfying assignment for ϕ by considering the arcs that are forbidden by η: formula (1) ensures that we must forbid an arc to either a t(xi) or an f(xi) state for all variables xi, but (2) ensures that we cannot forbid arcs to both. So, if we forbid an arc to a t(xi) state then in the corresponding valuation for ϕ we make xi false, while if we forbid an arc to an f(xi) state then we make xi true. The fact that ϕ∗ is part of the goal ensures that the normative system is indeed a valuation for ϕ. For ⇐, note that for any satisfying valuation for ϕ we can construct an individually rational normative system η, as follows: if the valuation makes xi true, we forbid the arc to the f(xi) state, while if the valuation makes xi false, we forbid the arc to the t(xi) state. The resulting normative system ensures γ1[1], and is thus individually rational. Notice that the Kripke structure constructed in the reduction contains just a single agent, and so the Theorem is proven.

5.2 Pareto Efficient Normative Systems

Pareto efficiency is a basic measure of how good a particular outcome is for a group of agents [11, p.7]. Intuitively, an outcome is Pareto efficient if there is no other outcome that makes every agent better off. In our framework, suppose we are given a social system Σ = ⟨M, η⟩, and asked whether η is Pareto efficient. This amounts to asking whether or not there is some other normative system η′ such that every agent would be better off under η′ than with η. If η′ makes every agent better off than η, then we say η′ Pareto dominates η. The decision problem is as follows:

PARETO EFFICIENT NORMATIVE SYSTEM (PENS):
Given: Multi-agent system M and normative system η over M.
Question: Is η Pareto efficient for M?

THEOREM 3. PENS is co-NP-complete, even for one-agent systems.

PROOF. Let M and η be as in the Theorem. We show that the complement problem to PENS, which we refer to as PARETO DOMINATED, is NP-complete. In this problem, we are given M and η, and we are asked whether η is Pareto dominated, i.e., whether or not there exists some η′ over M such that η′ makes every agent better off than η. For membership of NP, simply guess a normative system η′, and verify that for all i ∈ A, we have ui(K † η′) > ui(K † η) - verifying requires a polynomial number of model checking problems, each of which takes polynomial time. Since η′ ⊆ R, the normative system can be guessed in non-deterministic polynomial time. For NP-hardness, we reduce IRNS, which we know to be NP-complete from Theorem 2. Given an instance M of IRNS, we let M in the instance of PARETO DOMINATED be as in the IRNS instance, and define the normative system for PARETO DOMINATED to be η∅, the empty normative system. Now, it is straightforward that there exists a normative system η′ which Pareto dominates η∅ in M iff there exists an individually rational normative system in M. Since the complement problem is NP-complete, it follows that PENS is co-NP-complete.
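The complement problem PARETO DOMINATED used in this proof admits the same brute-force treatment (sketch; all_normative_systems enumerates N(R) and is, of course, exponential):

```python
from itertools import chain, combinations

def all_normative_systems(K):
    """Enumerate N(R): every subset eta of R such that R \\ eta is total."""
    arcs = sorted(K.R)
    for subset in chain.from_iterable(
            combinations(arcs, r) for r in range(len(arcs) + 1)):
        eta = frozenset(subset)
        if is_reasonable(K, eta):
            yield eta

def pareto_dominated(K, goals, eta):
    """Is there some eta2 making every agent strictly better off than eta?"""
    agents = sorted(goals)
    K_eta = implement(K, eta)
    current = {i: utility(K_eta, goals[i]) for i in agents}
    return any(all(utility(implement(K, eta2), goals[i]) > current[i]
                   for i in agents)
               for eta2 in all_normative_systems(K))

# eta is Pareto efficient for M iff not pareto_dominated(K, goals, eta).
```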
How about Pareto efficient norms for our toy example? Settling this question amounts to finding the dominant normative systems among η0 = η∅, η1, η2, η3 defined before, and η4 = {(s, t)}, η5 = {(t, s)}, η6 = {(s, s), (t, s)}, η7 = {(t, t), (s, t)} and η8 = {(s, t), (t, s)}. The utilities for each system are given in Table 1.

            η0  η1  η2  η3  η4  η5  η6  η7  η8
u1(K † η)    4   4   7   6   5   0   0   8   0
u2(K † η)    4   7   4   6   0   5   8   0   0

Table 1: Utilities for all possible norms in our example

From this, we infer that the Pareto efficient norms are η1, η2, η3, η6 and η7. Note that η8 prohibits the resource to be passed from one agent to another, and this is not good for any agent (since we have chosen S0 = {s, t}, no agent can be sure to ever get the resource, i.e., goal ϕi1 is not true in K † η8).

5.3 Nash Implementation Normative Systems

The most famous solution concept in game theory is of course Nash equilibrium [11, p.14]. A collection of strategies, one for each agent, is said to form a Nash equilibrium if no agent can benefit by doing anything other than playing its strategy, under the assumption that the other agents play theirs. Nash equilibria are important because they provide stable solutions to the problem of what strategy an agent should play. Note that in our toy example, although η3 is individually rational for each agent, it is not a Nash equilibrium, since given this norm, it would be beneficial for agent 1 to deviate (and likewise for 2).

In our framework, we say a social system Σ = ⟨M, η⟩ (where η ≠ η∅) is a Nash implementation if SC (i.e., everyone complying with the normative system) forms a Nash equilibrium in the game GΣ. The intuition is that if Σ is a Nash implementation, then complying with the normative system is a reasonable solution for all concerned: there can be no benefit to deviating from it; indeed, there is a positive incentive for all to comply. If Σ is not a Nash implementation, then the normative system is unlikely to succeed, since compliance is not rational for some agents. (Our choice of terminology is deliberately chosen to reflect the way the term Nash implementation is used in implementation theory, or mechanism design [11, p.185], where a game designer seeks to achieve some outcomes by designing the rules of the game such that these outcomes are equilibria.)

NASH IMPLEMENTATION (NI):
Given: Multi-agent system M.
Question: Does there exist a non-empty normative system η over M such that ⟨M, η⟩ forms a Nash implementation?

Verifying that a particular social system forms a Nash implementation can be done in polynomial time - it amounts to checking:

∀i ∈ A : ui(K † η) ≥ ui(K † (η⇂{i})).

This clearly requires only a polynomial number of model checking calls, each of which requires only polynomial time.
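That verification condition translates line-for-line into code, and wrapping it in the enumeration from the previous sketch gives a brute-force procedure for NI (again our own, exponential, sketch over the assumed helpers):

```python
def is_nash_implementation(K, goals, eta):
    """Everyone complying is a Nash equilibrium of G_Sigma (eta non-empty)."""
    if not eta:
        return False
    K_eta = implement(K, eta)
    for i in goals:
        # The structure that results if i alone defects: K dagger (eta excluding i).
        K_dev = implement(K, exclude(K, eta, {i}))
        if utility(K_eta, goals[i]) < utility(K_dev, goals[i]):
            return False      # agent i would gain by defecting
    return True

def ni(K, goals):
    """Does M admit a non-empty Nash implementation normative system?"""
    return any(is_nash_implementation(K, goals, eta)
               for eta in all_normative_systems(K) if eta)
```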
THEOREM 4. The NI problem is NP-complete, even for two-agent systems.

PROOF. For membership of NP, simply guess a normative system η and check that it forms a Nash implementation; since η ⊆ R, guessing can be done in non-deterministic polynomial time, and as we argued above, verifying that it forms a Nash implementation can be done in polynomial time.

For NP-hardness, we reduce SAT. Suppose we are given a SAT instance ϕ over Boolean variables x1, ..., xk. Then we construct an instance of NI as follows. We create two agents, A = {1, 2}. For each Boolean variable xi we create two Boolean variables, t(xi) and f(xi), and we then define a Kripke structure as shown in Figure 4, with s0 being the only initial state; the arc labelling in Figure 4 gives the α function, and each state is labelled with the propositions that are true in that state.

[Figure 4: Reduction for Theorem 4.]

For each Boolean variable xi, we define the formulae x⊤i and x⊥i as follows:

x⊤i = E○(t(xi) ∧ E○((E○t(xi)) ∧ A○¬f(xi)))
x⊥i = E○(f(xi) ∧ E○((E○f(xi)) ∧ A○¬t(xi)))

Let ϕ∗ be the formula obtained from ϕ by systematically substituting x⊤i for xi. Each agent has three goals: γi[0] = ⊤ for both i ∈ {1, 2}, while

γ1[1] = ⋀_{i=1}^{k} ((E○t(xi)) ∧ (E○f(xi)))

γ2[1] = E○E○ ⋀_{i=1}^{k} ((E○t(xi)) ∧ (E○f(xi)))

and finally, for both agents, γi[2] is the conjunction of the following formulae:

⋀_{i=1}^{k} (x⊤i ∨ x⊥i)    (3)

⋀_{i=1}^{k} ¬(x⊤i ∧ x⊥i)    (4)

⋀_{i=1}^{k} ¬(E○t(xi) ∧ E○f(xi))    (5)

ϕ∗    (6)

We denote the multi-agent system so constructed by Mϕ. Now, we prove that the SAT instance ϕ is satisfiable iff Mϕ has a Nash implementation normative system.

For the ⇒ direction, suppose ϕ is satisfiable, and let X be a satisfying valuation, i.e., a set of Boolean variables making ϕ true. We can extract from X a Nash implementation normative system η as follows: if xi ∈ X, then η includes the arc from s0 to the state in which f(xi) is true, and also includes the arc from s(2k + 1) to the state in which f(xi) is true; if xi ∉ X, then η includes the arc from s0 to the state in which t(xi) is true, and also includes the arc from s(2k + 1) to the state in which t(xi) is true. No other arcs, apart from those so defined, are included in η. Notice that η is individually rational for both agents: if they both comply with the normative system, then they will have their γi[2] goals achieved, which they do not in the basic system. To see that η forms a Nash implementation, observe that if either agent defects from η, then neither will have their γi[2] goals achieved: agent 1 strictly prefers (C, C) over (D, C), and agent 2 strictly prefers (C, C) over (C, D).

For the ⇐ direction, suppose there exists a Nash implementation normative system η, in which case η ≠ ∅. Then ϕ is satisfiable; for suppose not. Then the goals γi[2] are not achievable by any normative system (by construction). Now, since η must forbid at least one transition, at least one agent would fail to have its γi[1] goal achieved if it complied, so at least one would do better by defecting, i.e., not complying with η. But this contradicts the assumption that η is a Nash implementation, i.e., that (C, C) forms a Nash equilibrium.

This result is perhaps of some technical interest beyond the specific concerns of the present paper, since it is related to two problems that are of wider interest: the complexity of mechanism design [5], and the complexity of computing Nash equilibria [6, 7].

5.4 Richer Goal Languages

It is interesting to consider what happens to the complexity of the problems we consider above if we allow richer languages for goals: in particular, CTL∗ [9].
The main difference is that determining ui(K) in a given multi-agent system M when such a goal language is used involves solving a PSPACE-complete problem (since model checking for CTL∗ is PSPACE-complete [8]). In fact, it seems that for each of the three problems we consider above, the corresponding problem under the assumption of a CTL∗ representation for goals is also PSPACE-complete. It cannot be any easier, since determining the utility of a particular Kripke structure involves solving a PSPACE-complete problem. To see membership in PSPACE, we can exploit the fact that PSPACE = NPSPACE [12, p.150], and so we can guess the desired normative system, applying a PSPACE verification procedure to check that it has the desired properties.

6. CONCLUSIONS

Social norms are supposed to restrict our behaviour. Of course, such a restriction does not have to be bad: the fact that an agent's behaviour is restricted may seem a limitation, but there may be benefits if he can assume that others will also constrain their behaviour. The question then, for an agent, is how to be sure that others will comply with a norm. And, for a system designer, how to be sure that the system will behave socially, that is, according to its norm. Game theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems into game-theoretical questions. We have proposed a logical framework to reason about such scenarios, and we have given some computational costs for settling some of the main questions about them. Of course, our approach is in many senses open for extension or enrichment. An obvious issue to consider is the complexity of the questions we give for more practical representations of models (cf. [1]), and to consider other classes of allowable goals.

7. REFERENCES
[1] T. Agotnes, W. van der Hoek, J. A. Rodriguez-Aguilar, C. Sierra, and M. Wooldridge. On the logic of normative systems. In Proc. IJCAI-07, Hyderabad, India, 2007.
[2] R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time temporal logic. Jnl. of the ACM, 49(5):672-713, 2002.
[3] K. Binmore. Game Theory and the Social Contract Volume 1: Playing Fair. The MIT Press: Cambridge, MA, 1994.
[4] K. Binmore. Game Theory and the Social Contract Volume 2: Just Playing. The MIT Press: Cambridge, MA, 1998.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism design. In Proc. UAI, Edmonton, Canada, 2002.
[6] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. In Proc. IJCAI-03, pp. 765-771, Acapulco, Mexico, 2003.
[7] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. In Proc. STOC, Seattle, WA, 2006.
[8] E. A. Emerson. Temporal and modal logic. In Handbook of Theoretical Computer Science, Vol. B, pages 996-1072. Elsevier, 1990.
[9] E. A. Emerson and J. Y. Halpern. 'Sometimes' and 'not never' revisited: on branching time versus linear time temporal logic. Jnl. of the ACM, 33(1):151-178, 1986.
[10] D. Fitoussi and M. Tennenholtz. Choosing social laws for multi-agent systems: Minimality and simplicity. Artificial Intelligence, 119(1-2):61-101, 2000.
[11] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press: Cambridge, MA, 1994.
[12] C. H. Papadimitriou. Computational Complexity. Addison-Wesley: Reading, MA, 1994.
[13] Y. Shoham and M. Tennenholtz. On the synthesis of useful social laws for artificial agent societies. In Proc. AAAI, San Diego, CA, 1992.
[14] Y. Shoham and M. Tennenholtz. On social laws for artificial agent societies: Off-line design. In Computational Theories of Interaction and Agency, pages 597-618. The MIT Press: Cambridge, MA, 1996.
[15] W. van der Hoek, M. Roberts, and M. Wooldridge. Social laws in alternating time: Effectiveness, feasibility, and synthesis. Synthese, 2007.
[16] M. Wooldridge and W. van der Hoek. On obligations and normative ability. Jnl. of Applied Logic, 3:396-420, 2005.
Normative System Games ABSTRACT We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete. 1. INTRODUCTION Normative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1]. Although the various approaches to normative systems proposed in the literature differ on technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective will emerge. The idea of using social laws to coordinate multi-agent systems was proposed by Shoham and Tennenholtz [13, 14]; their approach was extended by van der Hoek et al. to include the idea of specifying a desirable global objective for a social law as a logical formula, with the idea being that the normative system would be regarded as successful if, after implementing it (i.e., after eliminating all forbidden actions), the objective formula was guaranteed to be satisfied in the system [15]. However, this model did not take into account the preferences of individual agents, and hence neglected to account for possible strategic behaviour by agents when deciding whether to comply with the normative system or not. This model of normative systems was further extended by attributing to each agent a single goal in [16]. However, this model was still too impoverished to capture the kinds of decision making that take place when an agent decides whether or not to comply with a social law. In reality, strategic considerations come into play: an agent takes into account not just whether the normative system would be beneficial for itself, but also whether other agents will rationally choose to participate. In this paper, we develop a model of normative systems in which agents are assumed to have multiple goals, of increasing priority. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures [8]: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. 
Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripkebased normative systems as games, in which agents must determine whether to comply with the normative system or not. We thus provide a very natural bridge between logical structures and languages and the techniques and concepts of game theory, which have proved to be very powerful for analysing social contract-style scenarios such as normative systems [3, 4]. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete. 2. KRIPKE STRUCTURES AND CTL We use Kripke structures as our basic semantic model for multiagent systems [8]. A Kripke structure is essentially a directed graph, with the vertex set S corresponding to possible states of the system being modelled, and the relation R ⊆ S × S capturing the 978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS possible transitions of the system; intuitively, these transitions are caused by agents in the system performing actions, although we do not include such actions in our semantic model (see, e.g., [13, 2, 15] for related models which include actions as first class citizens). We let S0 denote the set of possible initial states of the system. Our model is intended to correspond to the well-known interleaved concurrency model from the reactive systems literature: thus an arc corresponds to the execution of an atomic action by one of the processes in the system, which we call agents. It is important to note that, in contrast to such models as [2, 15], we are therefore here not modelling synchronous action. This assumption is not in fact essential for our analysis, but it greatly simplifies the presentation. However, we find it convenient to include within our model the agents that cause transitions. We therefore assume a set A of agents, and we label each transition in R with the agent that causes the transition via a function α: R → A. Finally, we use a vocabulary Φ = {p, q, ...} of Boolean variables to express the properties of individual states S: we use a function V: S → 2Φ to label each state with the Boolean variables true (or satisfied) in that state. Collecting these components together, an agent-labelled Kripke structure (over Φ) is a 6-tuple: • S is a finite, non-empty set of states, • S0 ⊆ S (S0 = ~ ∅) is the set of initial states; • R ⊆ S × S is a total binary relation on S, which we refer to as the transition relation1; • A = {1,..., n} is a set of agents; • α: R → A labels each transition in R with an agent; and • V: S → 2Φ labels each state with the set of propositional variables true in that state. In the interests of brevity, we shall hereafter refer to an agentlabelled Kripke structure simply as a Kripke structure. A path over a transition relation R is an infinite sequence of states π = s0, s1,...which must satisfy the property that ∀ u ∈ N: (su, su +1) ∈ R. If u ∈ N, then we denote by π [u] the component indexed by u in π (thus π [0] denotes the first element, π [1] the second, and so on). A path π such that π [0] = s is an s-path. Let ΠR (s) denote the set of s-paths over R; since it will usually be clear from context, we often omit reference to R, and simply write Π (s). We will sometimes refer to and think of an s-path as a possible computation, or system evolution, from s. EXAMPLE 1. 
Our running example is of a system with a single non-sharable resource, which is desired by two agents. Consider the Kripke structure depicted in Figure 1. We have two states, s and t, and two corresponding Boolean variables p1 and p2, which are 1In the branching time temporal logic literature, a relation R ⊆ S × S is said to be total iff ∀ s ∃ s': (s, s') ∈ R. Note that the term "total relation" is sometimes used to refer to relations R ⊆ S × S such that for every pair of elements s, s' ∈ S we have either (s, s') ∈ R or (s', s) ∈ R; we are not using the term in this way here. It is also worth noting that for some domains, other constraints may be more appropriate than simple totality. For example, one might consider the agent totality requirement, that in every state, every agent has at least one possible transition available: ∀ s ∀ i ∈ A ∃ s': (s, s') ∈ R and α (s, s') = i. Figure 1: The resource control running example. mutually exclusive. Think of pi as meaning "agent i has currently control over the resource". Each agent has two possible actions, when in possession of the resource: either give it away, or keep it. Obviously there are infinitely many different s-paths and t - paths. Let us say that our set of initial states S0 equals {s, t}, i.e., we don't make any assumptions about who initially has control over the resource. 2.1 CTL We now define Computation Tree Logic (CTL), a branching time temporal logic intended for representing the properties of Kripke structures [8]. Note that since CTL is well known and widely documented in the literature, our presentation, though complete, will be somewhat terse. We will use CTL to express agents' goals. The syntax of CTL is defined by the following grammar: where p ∈ Φ. We denote the set of CTL formula over Φ by LΦ; since Φ is understood, we usually omit reference to it. The semantics of CTL are given with respect to the satisfaction relation "| =", which holds between pairs of the form K, s, (where K is a Kripke structure and s is a state in K), and formulae of the language. The satisfaction relation is defined as follows: K, s | = A (ϕ U ψ) iff ∀ π ∈ Π (s), ∃ u ∈ N, s.t. K, π [u] | = ψ and ∀ v, (0 ≤ v <u): K, π [v] | = ϕ K, s | = E (ϕ U ψ) iff ∃ π ∈ Π (s), ∃ u ∈ N, s.t. K, π [u] | = ψ and ∀ v, (0 ≤ v <u): K, π [v] | = ϕ The remaining classical logic connectives ("∧", "→", "↔") are assumed to be defined as abbreviations in terms of ¬, ∨, in the conventional manner. The remaining CTL temporal operators are defined: We say ϕ is satisfiable if K, s | = ϕ for some Kripke structure K and state s in K; ϕ is valid if K, s | = ϕ for all Kripke structures K and states s in K. The problem of checking whether K, s | = ϕ for given K, s, ϕ (model checking) can be done in deterministic polynomial time, while checking whether a given ϕ is satisfiable or whether ϕ is valid is EXPTIME-complete [8]. We write K | = ϕ if K, s0 | = ϕ for all s0 ∈ S0, and | = ϕ if K | = ϕ for all K. 3. NORMATIVE SYSTEMS For our purposes, a normative system is simply a set of constraints on the behaviour of agents in a system [1]. More precisely, a normative system defines, for every possible system transition, whether or not that transition is considered to be legal or not. Different normative systems may differ on whether or not a transition is legal. Formally, a normative system η (w.r.t. a Kripke structure K = ~ S, S0, R, A, α, V ~) is simply a subset of R, such that R \ η is a total relation. 
The requirement that R \ η is total is a reasonableness constraint: it prevents normative systems which lead to states with no successor. Let N (R) = {η: (η ⊆ R) & (R \ η is total)} be the set of normative systems over R. The intended interpretation of a normative system η is that (s, s') ∈ η means transition (s, s') is forbidden in the context of η; hence R \ η denotes the legal transitions of η. Since it is assumed η is reasonable, we are guaranteed that a legal outward transition exists for every state. We denote the empty normative system by η0, so η0 = ∅. Note that the empty normative system η0 is reasonable with respect to any transition relation R. The effect of implementing a normative system on a Kripke structure is to eliminate from it all transitions that are forbidden according to this normative system (see [15, 1]). If K is a Kripke structure, and η is a normative system over K, then K † η denotes the Kripke structure obtained from K by deleting transitions forbidden in η. Formally, if K = ~ S, S0, R, A, α, V ~, and η ∈ N (R), then let K † η = K' be the Kripke structure K' = ~ S', S0', R', A', α', V' ~ where: • S = S', S0 = S0', A = A', and V = V'; • R' = R \ η; and • α' is the restriction of α to R': j α (s, s') if (s, s') ∈ R ' Notice that for all K, we have K † η0 = K. EXAMPLE 1. (continued) When thinking in terms of fairness, it seems natural to consider normative systems η that contain (s, s) or (t, t). A normative system with (s, t) would not be fair, in the sense that AOA ¬ p1 ∨ AOA ¬ p2 holds: in all paths, from some moment on, one agent will have control forever. Let us, for later reference, fix η1 = {(s, s)}, η2 = {(t, t)}, and η3 = {(s, s), (t, t)}. Later, we will address the issue of whether or not agents should rationally choose to comply with a particular normative system. In this context, it is useful to define operators on normative systems which correspond to groups of agents "defecting" from the normative system. Formally, let K = ~ S, S0, R, A, α, V ~ be a Kripke structure, let C ⊆ A be a set of agents over K, and let η be a normative system over K. Then: • η [C denotes the normative system that is the same as η except that it only contains the arcs of η that correspond to the actions of agents in C. We call η [C the restriction of η to C, and it is defined as: η [C = {(s, s'): (s, s') ∈ η & α (s, s') ∈ C}. Thus K † (η [C) is the Kripke structure that results if only the agents in C choose to comply with the normative system. • η 1 C denotes the normative system that is the same as η except that it only contains the arcs of η that do not correspond to actions of agents in C. We call η 1 C the exclusion of C from η, and it is defined as: η 1 C = {(s, s'): (s, s') ∈ η & α (s, s') ∈ ~ C}. Thus K † (η 1 C) is the Kripke structure that results if only the agents in C choose not to comply with the normative system (i.e., the only ones who comply are those in A \ C). Note that we have η 1 C = η [(A \ C) and η [C = η 1 (A \ C). 4. GOALS AND UTILITIES Next, we want to be able to capture the goals that agents have, as these will drive an agent's strategic considerations--particularly, as we will see, considerations about whether or not to comply with a normative system. We will model an agent's goals as a prioritised list of CTL formulae, representing increasingly desired properties that the agent wishes to hold. The intended interpretation of such a goal hierarchy γi for agent i ∈ A is that the "further up the hierarchy" a goal is, the more it is desired by i. 
Note that we assume that if an agent can achieve a goal at a particular level in its goal hierarchy, then it is unconcerned about goals lower down the hierarchy. Formally, a goal hierarchy, γ, (over a Kripke structure K) is a finite, non-empty sequence of CTL formulae in which, by convention, ϕ0 =. We use a natural number indexing notation to extract the elements of a goal hierarchy, so if γ = (ϕ0, ϕ1,..., ϕk) then γ [0] = ϕ0, γ [1] = ϕ1, and so on. We denote the largest index of any element in γ by | γ |. A particular Kripke structure K is said to satisfy a goal at index x in goal hierarchy γ if K | = γ [x], i.e., if γ [x] is satisfied in all initial states S0 of K. An obvious potential property of goal hierarchies is monotonicity: where goals at higher levels in the hierarchy logically imply those at lower levels in the hierarchy. Formally, a goal hierarchy γ is monotonic if for all x ∈ {1,..., | γ |} ⊆ N, we have | = γ [x] → γ [x − 1]. The simplest type of monotonic goal hierarchy is where γ [x + 1] = γ [x] ∧ ψx +1 for some ψx +1, so at each successive level of the hierarchy, we add new constraints to the goal of the previous level. Although this is a natural property of many goal hierarchies, it is not a property we demand of all goal hierarchies. EXAMPLE 1. (continued) Suppose the agents have similar, but opposing goals: each agent i wants to keep the source as often and long as possible for himself. Define each agent's goal hierarchy as: The most desired goal of agent i is to, in every computation, always have the resource, pi (this is expressed in ϕi8). Thanks to our reasonableness constraint, this goal implies ϕi7 which says that, no matter how the computation paths evolve, it will always be that all The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 883 continuations will hit a point in which pi, and, moreover, there is a continuation in which pi always holds. Goal cpi6 is a fairness constraint implied by it. Note that A ♦ pi says that every computation eventually reaches a pi state. This may mean that after pi has happened, it will never happen again. cpi6 circumvents this: it says that, no matter where you are, there should be a future pi state. The goal cpi5 is like the strong goal cpi8 but it accepts that this is only achieved in some computation, eventually. cpi4 requires that in every path, there is always a continuation that eventually gives pi. Goal cpi3 says that pi should be true on some branch, from some moment on. It implies cpi2 which expresses that there is a computation such that everywhere during it, it is possible to choose a continuation that eventually satisfies pi. This implies cpi 1, which says that pi should at least not be impossible. If we even drop that demand, we have the trivial goal cpi 0. We remark that it may seem more natural to express a fairness constraint cpi6 as A ♦ pi. However, this is not a proper CTL formula. It is in fact a formula in CTL' [9], and in this logic, the two expressions would be equivalent. However, our basic complexity results in the next sections would not hold for the richer language CTL' 2, and the price to pay for this is that we have to formulate our desired goals in a somewhat more cumbersome manner than we might ideally like. Of course, our basic framework does not demand that goals are expressed in CTL; they could equally well be expressed in CTL' or indeed ATL [2] (as in [15]). We comment on the implications of alternative goal representations at the conclusion of the next section. 
A multi-agent system collects together a Kripke structure (representing the basic properties of a system under consideration: its state space, and the possible state transitions that may occur in it), together with a goal hierarchy, one for each agent, representing the aspirations of the agents in the system. Formally, a multi-agent system, M, is an (n + 1) - tuple: where K is a Kripke structure, and for each agent i in K, γi is a goal hierarchy over K. 4.1 The Utility of Normative Systems We can now define the utility of a Kripke structure for an agent. The idea is that the utility of a Kripke structure is the highest index of any goal that is guaranteed for that agent in the Kripke structure. We make this precise in the function ui (·): Note that using these definitions of goals and utility, it never makes sense to have a goal cp at index n if there is a logically weaker goal 0 at index n + k in the hierarchy: by definition of utility, it could never be n for any structure K. EXAMPLE 1. (continued) Let M = (K, γ1, γ2) be the multiagent system of Figure 1, with γ1 and γ2 as defined earlier in this example. Recall that we have defined S0 as {s, t}. Then, u1 (K) = u2 (K) = 4: goal cp4 is true in S 0, but cp5 is not. To see that cp24 = A E ♦ p2 is true in s for instance: note that on ever path it is always the case that there is a transition to t, in which p2 is true. Notice that since for any goal hierarchy γi we have γ [0] = T, then for all Kripke structures, ui (K) is well defined, with ui (K)> 2 CTL' model checking is PSPACE-complete, and hence much worse (under standard complexity theoretic assumptions) than model checking CTL [8]. Figure 2: Benefits of implementing a normative system 77 (left) and pay-offs for the game ΣM. 0. Note that this is an ordinal utility measure: it tells us, for any given agent, the relative utility of different Kripke structures, but utility values are not on some standard system-wide scale. The fact that ui (K1)> ui (K2) certainly means that i strictly prefers K1 over K2, but the fact that ui (K)> uj (K) does not mean that i values K more highly than j. Thus, it does not make sense to compare utility values between agents, and so for example, some system wide measures of utility, (notably those measures that aggregate individual utilities, such as social welfare), do not make sense when applied in this setting. However, as we shall see shortly, other measures--such as Pareto efficiency--can be usefully applied. There are other representations for goals, which would allow us to define cardinal utilities. The simplest would be to specify goals γ for an agent as a finite, non-empty, one-to-one relation: γ C G xR. We assume that the x values in pairs (cp, x) E γ are specified so that x for agent i means the same as x for agent j, and so we have cardinal utility. We then define the utility for i of a Kripke structure K asui (K) = max {x: (cp, x) E γi & K | = cp}. The results of this paper in fact hold irrespective of which of these representations we actually choose; we fix upon the goal hierarchy approach in the interests of simplicity. Our next step is to show how, in much the same way, we can lift the utility function from Kripke structures to normative systems. Suppose we are given a multi-agent system M = (K, γ1,..., γn) and an associated normative system 77 over K. Let for agent i, Si (K, K') be the difference in his utility when moving from K to K': Si (K, K') = ui (K') − ui (K). Then the utility of 77 to agent i wrt K is Si (K, K † 77). 
We will sometimes abuse notation and just write Si (K, 77) for this, and refer to it as the benefit for agent i of implementing 77 in K. Note that this benefit can be negative. Summarising, the utility of a normative system to an agent is the difference between the utility of the Kripke structure in which the normative system was implemented and the original Kripke structure. If this value is greater than 0, then the agent would be better off if the normative system were imposed, while if it is less than 0 then the agent would be worse off if 77 were imposed than in the original system. We say 77 is individually rational for i wrt K if Si (K, 77)> 0, and individually rational simpliciter if 77 is individually rational for every agent. A social system now is a pair 4.2 Universal and Existential Goals 884 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Keeping in mind that a norm η restricts the possible transitions of the model under consideration, we make the following observation, borrowing from [15]. Some classes of goals are monotonic or anti-monotonic with respect to adding additional constraints to a system. Let us therefore define two fragments of the language of CTL: the universal language Lu with typical element μ, and the existential fragment Le with typical element ε. Let us say, for two Kripke structures K1 = (S, S0, R1, A, α, V) and K2 = (S, S0, R2, A, α, V) that K1 is a subsystem of K2 and K2 is a supersystem of K1, written K1 C K2 iff R1 ⊆ R2. Note that typically K † η C K. Then we have (cf. [15]). This has the following effect on imposing a new norm: 1. Suppose agent i's utility ui (K) is n, and γi [n] E Lu, (i.e., γi [n] is a universal formula). Then, for any normative system η, δi (K, η)> 0. 2. Suppose agent i's utility ui (K † η) is n, and γi [n] is an existential formula ε. Then, δi (K † η, K)> 0. Corollary 1's first item says that an agent whose current maximal goal in a system is a universal formula, need never fear the imposition of a new norm η. The reason is that his current goal will at least remain true (in fact a goal higher up in the hierarchy may become true). It follows from this that an agent with only universal goals can only gain from the imposition of normative systems η. The opposite is true for existential goals, according to the second item of the corollary: it can never be bad for an agent to "undo" a norm η. Hence, an agent with only existential goals might well fear any norm η. However, these observations implicitly assume that all agents in the system will comply with the norm. Whether they will in fact do so, of course, is a strategic decision: it partly depends on what the agent thinks that other agents will do. This motivates us to consider normative system games. 5. NORMATIVE SYSTEM GAMES We now have a principled way of talking about the utility of normative systems for agents, and so we can start to apply the technical apparatus of game theory to analyse them. Suppose we have a multi-agent system M = (K, γ1,..., γn) and a normative system η over K. It is proposed to the agents in M that η should be imposed on K, (typically to achieve some coordination objective). Our agent--let's say agent i--is then faced with a choice: should it comply with the strictures of the normative system, or not? Note that this reasoning takes place before the agent is "in" the system--it is a design time consideration. We can understand the reasoning here as a game, as follows. A game in strategic normal form (cf. [11, p. 
11]) is a structure: • A0 = {1,..., n} is a set of agents--the players of the game; • Si is the set of strategies for each agent i E A0 (a strategy for an agent i is nothing else than a choice between alternative actions); and • Ui: (S1 x · · · x Sn)--* R is the utility function for agent i E A0, which assigns a utility to every combination of strategy choices for the agents. Now, suppose we are given a social system Σ = (M, η) where M = (K, γ1,..., γn). Then we can associate a game--the normative system game--0Σ with Σ, as follows. The agents A0 in 0Σ are as in Σ. Each agent i has just two strategies available to it: • C--comply (cooperate) with the normative system; and • D--do not comply with (defect from) the normative system. If S is a tuple of strategies, one for each agent, and x E {C, D}, then we denote by A0xS the subset of agents that play strategy x in S. Hence, for a social system Σ = (M, η), the normative system η [A0CS only implements the restrictions for those agents that choose to cooperate in 0Σ. Note that this is the same as η 1 A0DS: the normative system that excludes all the restrictions of agents that play D in 0Σ. We then define the utility functions Ui for each So, for example, if SD is a collection of strategies in which every agent defects (i.e., does not comply with the norm), then Ui (SD) = δi (K, (η 1 A0D SD)) = ui (K † η0) − ui (K) = 0. In the same way, if SC is a collection of strategies in which every agent cooperates (i.e., complies with the norm), then Ui (SC) = δi (K, (η 1 A0DSC)) = ui (K † (η 1 ∅)) = ui (K † η). We can now start to investigate some properties of normative system games. 5.1 Individually Rational Normative Systems A normative system is individually rational if every agent would fare better if the normative system were imposed than otherwise. This is a necessary, although not sufficient condition on a norm to expect that everybody respects it. Note that η3 of our example is individually rational for both 1 and 2, although this is not a stable situation: given that the other plays C, i is better of by playing D. We can easily characterise individually rationality with respect to the corresponding game in strategic form, as follows. Let Σ = (M, η) be a social system. Then the following are equivalent: The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 885 Figure 3: The Kripke structure produced in the reduction of Theorem 2; all transitions are associated with agent 1, the only initial state is s0. 1. η is individually rational in M; 2. Vi E A9, Ui (SC)> Ui (SD) in the game 9Σ. The decision problem associated with individually rational normative systems is as follows: INDIVIDUALLY RATIONAL NORMATIVE SYSTEM (IRNS): Given: Multi-agent system M. Question: Does there exist an individually rational normative system for M? THEOREM 2. IRNS is NP-complete, even in one-agent systems. PROOF. For membership of NP, guess a normative system η, and verify that it is individually rational. Since η C R, we will be able to guess it in nondeterministic polynomial time. To verify that it is individually rational, we check that for all i, we have ui (K t η)> ui (K); computing K t η is just set subtraction, so can be done in polynomial time, while determining the value of ui (K) for any K can be done with a polynomial number of model checking calls, each of which requires only time polynomial in the K and γ. Hence verifying that ui (K t η)> ui (K) requires only polynomial time. For NP-hardness, we reduce SAT [12, p. 77]. 
Given a SAT instance ϕ over Boolean variables x1,..., xk, we produce an instance of IRNS as follows. First, we define a single agent A = {1}. For each Boolean variable xi in the SAT instance, we create two Boolean variables t (xi) and f (xi) in the IRNS instance. We then create a Kripke structure Kϕ with 2k + 1 states, as shown in Figure 3: arcs in this graph correspond to transitions in Kϕ. Let ϕ * be the result of systematically substituting for every Boolean variable xi in ϕ the CTL expression (E ❢ t (xi)). Next, consider the following formulae: We then define the goal hierarchy for all agent 1 as follows: We claim there is an individually rational normative system for the instance so constructed iff ϕ is satisfiable. First, notice that any individually rational normative system must force γ1 [1] to be true, since in the original system, we do not have γ1 [1]. For the = * direction, if there is an individually rational normative system η, then we construct a satisfying assignment for ϕ by considering the arcs that are forbidden by η: formula (1) ensures that we must forbid an arc to either a t (xi) or a f (xi) state for all variables xi, but (2) ensures that we cannot forbid arcs to both. So, if we forbid an arc to a t (xi) state then in the corresponding valuation for ϕ we make xi false, while if we forbid an arc to a f (xi) state then we make xi true. The fact that ϕ * is part of the goal ensures that the normative system is indeed a valuation for ϕ. For, note that for any satisfying valuation for ϕ we can construct an individually rational normative system η, as follows: if the valuation makes xi true, we forbid the arc to the f (xi) state, while if the valuation makes xi false, we forbid the arc to the t (xi) state. The resulting normative system ensures γ1 [1], and is thus individually rational. Notice that the Kripke structure constructed in the reduction contains just a single agent, and so the Theorem is proven. 5.2 Pareto Efficient Normative Systems Pareto efficiency is a basic measure of how good a particular outcome is for a group of agents [11, p. 7]. Intuitively, an outcome is Pareto efficient if there is no other outcome that makes every agent better off. In our framework, suppose we are given a social system Σ = (M, η), and asked whether η is Pareto efficient. This amounts to asking whether or not there is some other normative system η' such that every agent would be better off under η' than with η. If η' makes every agent better off than η, then we say η' Pareto dominates η. The decision problem is as follows: PARETO EFFICIENT NORMATIVE SYSTEM (PENS): Given: Multi-agent system M and normative system η over M. Question: Is η Pareto efficient for M? THEOREM 3. PENS is co-NP-complete, even for one-agent systems. PROOF. Let M and η be as in the Theorem. We show that the complement problem to PENS, which we refer to as PARETO DOMINATED, is NP-complete. In this problem, we are given M and η, and we are asked whether η is Pareto dominated, i.e., whether or not there exists some η' over M such that η' makes every agent better off than η. For membership of NP, simply guess a normative system η', and verify that for all i E A, we have ui (K t η')> ui (K t η)--verifying requires a polynomial number of model checking problems, each of which takes polynomial time. Since η' C R, the normative system can be guessed in non-deterministic polynomial time. For NP-hardness, we reduce IRNS, which we know to be NPcomplete from Theorem 2. 
Given an instance M of IRNS, we let M in the instance of PARETO DOMINATED be as in the IRNS instance, and define the normative system for PARETO DOMINATED to be η∅, the empty normative system. Now, it is straightforward that there exists a normative system η′ which Pareto dominates η∅ in M iff there exists an individually rational normative system in M. Since the complement problem is NP-complete, it follows that PENS is co-NP-complete.

Table 1: Utilities for all possible norms in our example

How about Pareto efficient norms for our toy example? Settling this question amounts to finding the dominant normative systems among η0 = η∅, η1, η2, η3 defined before, and η4 = {(s, t)}, η5 = {(t, s)}, η6 = {(s, s), (t, s)}, η7 = {(t, t), (s, t)} and η8 = {(s, t), (t, s)}. The utilities for each system are given in Table 1. From this, we infer that the Pareto efficient norms are η1, η2, η3, η6 and η7. Note that η8 prohibits the resource to be passed from one agent to another, and this is not good for any agent (since we have chosen S0 = {s, t}, no agent can be sure to ever get the resource, i.e., goal γi[1] is not true in K † η8).

5.3 Nash Implementation Normative Systems
The most famous solution concept in game theory is of course Nash equilibrium [11, p. 14]. A collection of strategies, one for each agent, is said to form a Nash equilibrium if no agent can benefit by doing anything other than playing its strategy, under the assumption that the other agents play theirs. Nash equilibria are important because they provide stable solutions to the problem of what strategy an agent should play. Note that in our toy example, although η3 is individually rational for each agent, it is not a Nash equilibrium, since given this norm, it would be beneficial for agent 1 to deviate (and likewise for 2). In our framework, we say a social system Σ = (M, η) (where η ≠ η∅) is a Nash implementation if SC (i.e., everyone complying with the normative system) forms a Nash equilibrium in the game GΣ. The intuition is that if Σ is a Nash implementation, then complying with the normative system is a reasonable solution for all concerned: there can be no benefit to deviating from it, indeed, there is a positive incentive for all to comply. If Σ is not a Nash implementation, then the normative system is unlikely to succeed, since compliance is not rational for some agents. (Our choice of terminology is deliberately chosen to reflect the way the term "Nash implementation" is used in implementation theory, or mechanism design [11, p. 185], where a game designer seeks to achieve some outcomes by designing the rules of the game such that these outcomes are equilibria.)
NASH IMPLEMENTATION (NI): Given: Multi-agent system M. Question: Does there exist a non-empty normative system η over M such that (M, η) forms a Nash implementation?
Verifying that a particular social system forms a Nash implementation can be done in polynomial time--it amounts to checking that no agent would benefit by unilaterally deviating from SC, i.e., that Ui(SC) is at least Ui of the profile in which i alone plays D, for every agent i. This clearly requires only a polynomial number of model checking calls, each of which requires only polynomial time.
THEOREM 4. The NI problem is NP-complete, even for two-agent systems.
PROOF. For membership of NP, simply guess a normative system η and check that it forms a Nash implementation; since η ⊆ R, guessing can be done in non-deterministic polynomial time, and as we argued above, verifying that it forms a Nash implementation can be done in polynomial time. For NP-hardness, we reduce SAT.

Figure 4: Reduction for Theorem 4.
Suppose we are given a SAT instance ϕ over Boolean variables x1,..., xk. Then we construct an instance of NI as follows. We create two agents, A = {1, 2}. For each Boolean variable xi we create two Boolean variables, t(xi) and f(xi), and we then define a Kripke structure as shown in Figure 4, with s0 being the only initial state; the arc labelling in Figure 4 gives the α function, and each state is labelled with the propositions that are true in that state. For each Boolean variable xi, we define the formulae xi⊤ and xi⊥ as follows: Let ϕ* be the formula obtained from ϕ by systematically substituting xi⊤ for xi. Each agent has three goals: γi[0] = ⊤ for both i ∈ {1, 2}; the goals γi[1] are defined for each agent; and finally, for both agents, γi[2] is the conjunction of the following formulae: We denote the multi-agent system so constructed by Mϕ. Now, we prove that the SAT instance ϕ is satisfiable iff Mϕ has a Nash implementation normative system. For the ⇒ direction, suppose ϕ is satisfiable, and let X be a satisfying valuation, i.e., a set of Boolean variables making ϕ true. We can extract from X a Nash implementation normative system η as follows: if xi ∈ X, then η includes the arc from s0 to the state in which f(xi) is true, and also includes the arc from s2k+1 to the state in which f(xi) is true; if xi ∉ X, then η includes the arc from s0 to the state in which t(xi) is true, and also includes the arc from s2k+1 to the state in which t(xi) is true. No other arcs, apart from those so defined, are included in η. Notice that η is individually rational for both agents: if they both comply with the normative system, then they will have their γi[2] goals achieved, which they do not in the basic system. To see that η forms a Nash implementation, observe that if either agent defects from η, then neither will have their γi[2] goals achieved: agent 1 strictly prefers (C, C) over (D, C), and agent 2 strictly prefers (C, C) over (C, D). For the ⇐ direction, suppose there exists a Nash implementation normative system η, in which case η ≠ ∅. Then ϕ is satisfiable; for suppose not. Then the goals γi[2] are not achievable by any normative system (by construction). Now, since η must forbid at least one transition, at least one agent would fail to have its γi[1] goal achieved if it complied, so at least one would do better by defecting, i.e., not complying with η. But this contradicts the assumption that η is a Nash implementation, i.e., that (C, C) forms a Nash equilibrium. This result is perhaps of some technical interest beyond the specific concerns of the present paper, since it is related to two problems that are of wider interest: the complexity of mechanism design [5], and the complexity of computing Nash equilibria [6, 7].

5.4 Richer Goal Languages
It is interesting to consider what happens to the complexity of the problems we consider above if we allow richer languages for goals: in particular, CTL* [9]. The main difference is that determining ui(K) in a given multi-agent system M when such a goal language is used involves solving a PSPACE-complete problem (since model checking for CTL* is PSPACE-complete [8]). In fact, it seems that for each of the three problems we consider above, the corresponding problem under the assumption of a CTL* representation for goals is also PSPACE-complete.
It cannot be any easier, since determining the utility of a particular Kripke structure involves solving a PSPACE-complete problem. To see membership in PSPACE we can exploit the fact that PSPACE = NPSPACE [12, p. 150], and so we can "guess" the desired normative system, applying a PSPACE verification procedure to check that it has the desired properties.

6. CONCLUSIONS
Social norms are supposed to restrict our behaviour. Of course, such a restriction does not have to be bad: the fact that an agent's behaviour is restricted may seem a limitation, but there may be benefits if he can assume that others will also constrain their behaviour. The question then, for an agent, is how to be sure that others will comply with a norm. And, for a system designer, how to be sure that the system will behave socially, that is, according to its norm. Game theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems to game theoretical questions. We have proposed a logical framework to reason about such scenarios, and we have given some computational costs for settling some of the main questions about them. Of course, our approach is in many senses open for extension or enrichment. An obvious issue to consider is the complexity of the questions we give for more practical representations of models (cf. [1]), and to consider other classes of allowable goals.
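Before moving on, a compact sketch can tie the game construction of Section 5 together. The following Python code is ours and purely illustrative, not the paper's machinery: goals are ordinary Python predicates standing in for CTL formulae, the names (KripkeModel, compliance_game, and so on) are invented for this sketch, and utility is a stand-in for the ordinal utility ui derived from a goal hierarchy. It builds the payoff table of GΣ and runs the individual-rationality and Nash-implementation tests directly from their definitions.

```python
from itertools import product

# Illustrative sketch only: all names here are our own. Goals are
# Python predicates over a model, standing in for CTL model checking.

class KripkeModel:
    def __init__(self, states, arcs, owner):
        self.states = frozenset(states)
        self.arcs = frozenset(arcs)      # transitions (s, s')
        self.owner = dict(owner)         # arc -> agent controlling it

    def implement(self, eta):
        """K † eta: delete the arcs forbidden by normative system eta."""
        return KripkeModel(self.states, self.arcs - set(eta), self.owner)

def utility(model, goal_hierarchy):
    """Ordinal utility: the highest level whose goal the model
    satisfies (0 if none)."""
    return max([0] + [j + 1 for j, goal in enumerate(goal_hierarchy)
                      if goal(model)])

def compliance_game(model, goals, eta):
    """Payoff table of G_Sigma: each agent plays C or D, and only the
    restrictions owned by compliant agents are implemented."""
    n = len(goals)
    payoffs = {}
    for profile in product('CD', repeat=n):
        compliers = {i for i in range(n) if profile[i] == 'C'}
        kept = {arc for arc in eta if model.owner[arc] in compliers}
        k_eta = model.implement(kept)
        payoffs[profile] = tuple(
            utility(k_eta, goals[i]) - utility(model, goals[i])
            for i in range(n))
    return payoffs

def individually_rational(payoffs, n):
    """U_i(S_C) > U_i(S_D) for every agent i (note U_i(S_D) = 0)."""
    all_c, all_d = ('C',) * n, ('D',) * n
    return all(payoffs[all_c][i] > payoffs[all_d][i] for i in range(n))

def nash_implementation(payoffs, n):
    """Is all-C a Nash equilibrium? Check every unilateral deviation."""
    all_c = ('C',) * n
    return all(payoffs[all_c][i] >=
               payoffs[all_c[:i] + ('D',) + all_c[i + 1:]][i]
               for i in range(n))
```

A guess-and-verify loop over subsets of arcs, calling individually_rational on each candidate, is exactly the NP-membership argument used in the proof of Theorem 2.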
Normative System Games ABSTRACT We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete. 1. INTRODUCTION Normative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1]. Although the various approaches to normative systems proposed in the literature differ on technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective will emerge. The idea of using social laws to coordinate multi-agent systems was proposed by Shoham and Tennenholtz [13, 14]; their approach was extended by van der Hoek et al. to include the idea of specifying a desirable global objective for a social law as a logical formula, with the idea being that the normative system would be regarded as successful if, after implementing it (i.e., after eliminating all forbidden actions), the objective formula was guaranteed to be satisfied in the system [15]. However, this model did not take into account the preferences of individual agents, and hence neglected to account for possible strategic behaviour by agents when deciding whether to comply with the normative system or not. This model of normative systems was further extended by attributing to each agent a single goal in [16]. However, this model was still too impoverished to capture the kinds of decision making that take place when an agent decides whether or not to comply with a social law. In reality, strategic considerations come into play: an agent takes into account not just whether the normative system would be beneficial for itself, but also whether other agents will rationally choose to participate. In this paper, we develop a model of normative systems in which agents are assumed to have multiple goals, of increasing priority. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures [8]: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. 
Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We thus provide a very natural bridge between logical structures and languages and the techniques and concepts of game theory, which have proved to be very powerful for analysing social contract-style scenarios such as normative systems [3, 4]. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.

2. KRIPKE STRUCTURES AND CTL
2.1 CTL
3. NORMATIVE SYSTEMS
4. GOALS AND UTILITIES
4.1 The Utility of Normative Systems
4.2 Universal and Existential Goals
5. NORMATIVE SYSTEM GAMES
5.1 Individually Rational Normative Systems (IRNS)
5.2 Pareto Efficient Normative Systems (PENS)
5.3 Nash Implementation Normative Systems
5.4 Richer Goal Languages

6. CONCLUSIONS
Social norms are supposed to restrict our behaviour. Of course, such a restriction does not have to be bad: the fact that an agent's behaviour is restricted may seem a limitation, but there may be benefits if he can assume that others will also constrain their behaviour. The question then, for an agent, is how to be sure that others will comply with a norm. And, for a system designer, how to be sure that the system will behave socially, that is, according to its norm. Game theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems to game theoretical questions. We have proposed a logical framework to reason about such scenarios, and we have given some computational costs for settling some of the main questions about them. Of course, our approach is in many senses open for extension or enrichment. An obvious issue to consider is the complexity of the questions we give for more practical representations of models (cf. [1]), and to consider other classes of allowable goals.
I-74
On the relevance of utterances in formal inter-agent dialogues
Work on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues. This work has established the space in which agents are permitted to interact through dialogues. Recently, there has been increasing interest in the mechanisms agents might use to choose how to act -- the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue. Key in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks. Here we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring.
[ "relev", "dialogu", "multiag system", "graph", "node", "tree", "statu", "argument", "leaf" ]
[ "P", "P", "U", "U", "U", "U", "U", "U", "U" ]
On the relevance of utterances in formal inter-agent dialogues
Simon Parsons1, Elizabeth Sklar1, Peter McBurney2, Michael Wooldridge2
1 Department of Computer & Information Science, Brooklyn College, City University of New York, Brooklyn NY 11210 USA, {parsons,sklar}@sci.brooklyn.cuny.edu
2 Department of Computer Science, University of Liverpool, Liverpool L69 7ZF UK, {p.j.mcburney,m.j.wooldridge}@csc.liv.ac.uk

ABSTRACT
Work on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues. This work has established the space in which agents are permitted to interact through dialogues. Recently, there has been increasing interest in the mechanisms agents might use to choose how to act - the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue. Key in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks. Here we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence: Coherence & co-ordination; languages & structures; multiagent systems.
General Terms: Design, languages, theory.

1. INTRODUCTION
Finding ways for agents to reach agreements in multiagent systems is an area of active research. One mechanism for achieving agreement is through the use of argumentation - where one agent tries to convince another agent of something during the course of some dialogue. Early examples of argumentation-based approaches to multiagent agreement include the work of Dignum et al. [7], Kraus [14], Parsons and Jennings [16], Reed [23], Schroeder et al. [25] and Sycara [26]. The work of Walton and Krabbe [27], popularised in the multiagent systems community by Reed [23], has been particularly influential in the field of argumentation-based dialogue. This work influenced the field in a number of ways, perhaps most deeply in framing multi-agent interactions as dialogue games in the tradition of Hamblin [13]. Viewing dialogues in this way, as in [2, 21], provides a powerful framework for analysing the formal properties of dialogues, and for identifying suitable protocols under which dialogues can be conducted [18, 20]. The dialogue game view overlaps with work on conversation policies (see, for example, [6, 10]), but differs in considering the entire dialogue rather than dialogue segments. In this paper, we extend the work of [18] by considering the role of relevance - the relationship between utterances in a dialogue. Relevance is a topic of increasing interest in argumentation-based dialogue because it relates to the scope that an agent has for applying strategic manoeuvring to obtain the outcomes that it requires [19, 22, 24]. Our work identifies the limits on such rhetorical manoeuvring, showing when it can and cannot have an effect.

2. BACKGROUND
We begin by introducing the formal system of argumentation that underpins our approach, as well as the corresponding terminology and notation, all taken from [2, 8, 17]. A dialogue is a sequence of messages passed between two or more members of a set of agents A. An agent α maintains a knowledge base, Σα, containing formulas of a propositional language L and having no deductive closure.
Agent α also maintains the set of its past utterances, called the commitment store, CSα. We refer to this as an agent's public knowledge, since it contains information that is shared with other agents. In contrast, the contents of Σα are private to α. Note that in the description that follows, we assume that ⊢ is the classical inference relation, that ≡ stands for logical equivalence, and we use Δ to denote all the information available to an agent. Thus in a dialogue between two agents α and β, Δα = Σα ∪ CSα ∪ CSβ, so the commitment store CSα can be loosely thought of as a subset of Δα consisting of the assertions that have been made public. In some dialogue games, such as those in [18], anything in CSα is either in Σα or can be derived from it. In other dialogue games, such as those in [2], CSα may contain things that cannot be derived from Σα.

Definition 2.1. An argument A is a pair (S, p) where p is a formula of L and S a subset of Δ such that (i) S is consistent; (ii) S ⊢ p; and (iii) S is minimal, so no proper subset of S satisfying both (i) and (ii) exists. S is called the support of A, written S = Support(A), and p is the conclusion of A, written p = Conclusion(A).

Thus we talk of p being supported by the argument (S, p). In general, since Δ may be inconsistent, arguments in A(Δ), the set of all arguments which can be made from Δ, may conflict, and we make this idea precise with the notion of undercutting:

Definition 2.2. Let A1 and A2 be arguments in A(Δ). A1 undercuts A2 iff ∃¬p ∈ Support(A2) such that p ≡ Conclusion(A1).

In other words, an argument is undercut if and only if there is another argument which has as its conclusion the negation of an element of the support for the first argument. To capture the fact that some beliefs are more strongly held than others, we assume that any set of beliefs has a preference order over it. We consider all information available to an agent, Δ, to be stratified into non-overlapping subsets Δ1, ..., Δn such that beliefs in Δi are all equally preferred and are preferred over elements in Δj where i > j. The preference level of a nonempty subset S ⊂ Δ, where different elements s ∈ S may belong to different layers Δi, is valued at the highest numbered layer which has a member in S and is referred to as level(S). In other words, S is only as strong as its weakest member. Note that the strength of a belief as used in this context is a separate concept from the notion of support discussed earlier.

Definition 2.3. Let A1 and A2 be arguments in A(Δ). A1 is preferred to A2 according to Pref, A1 Pref A2, iff level(Support(A1)) > level(Support(A2)).

If A1 is preferred to A2, we say that A1 is stronger than A2. We can now define the argumentation system we will use:

Definition 2.4. An argumentation system is a triple ⟨A(Δ), Undercut, Pref⟩ such that:
• A(Δ) is a set of the arguments built from Δ,
• Undercut is a binary relation representing the defeat relationship between arguments, Undercut ⊆ A(Δ) × A(Δ), and
• Pref is a pre-ordering on A(Δ) × A(Δ).

The preference order makes it possible to distinguish different types of relations between arguments:

Definition 2.5. Let A1, A2 be two arguments of A(Δ).
• If A2 undercuts A1 then A1 defends itself against A2 iff A1 Pref A2. Otherwise, A1 does not defend itself.
• A set of arguments A defends A1 iff for every A2 that undercuts A1, where A1 does not defend itself against A2, there is some A3 ∈ A such that A3 undercuts A2 and A2 does not defend itself against A3.
We write AUndercut,Pref to denote the set of all non-undercut arguments and arguments defending themselves against all their undercutting arguments. The set A(Δ) of acceptable arguments of the argumentation system ⟨A(Δ), Undercut, Pref⟩ is [1] the least fixpoint of a function F, where for A ⊆ A(Δ):

F(A) = {(S, p) ∈ A(Δ) | (S, p) is defended by A}

Definition 2.6. The set of acceptable arguments for an argumentation system ⟨A(Δ), Undercut, Pref⟩ is recursively defined as:

A(Δ) = ⋃ Fi≥0(∅) = AUndercut,Pref ∪ [⋃ Fi≥1(AUndercut,Pref)]

An argument is acceptable if it is a member of the acceptable set, and a proposition is acceptable if it is the conclusion of an acceptable argument. An acceptable argument is one which is, in some sense, proven since all the arguments which might undermine it are themselves undermined.

Definition 2.7. If there is an acceptable argument for a proposition p, then the status of p is accepted, while if there is not an acceptable argument for p, the status of p is not accepted. Argument A is said to affect the status of another argument A′ if changing the status of A will change the status of A′.

3. DIALOGUES
Systems like those described in [2, 18] lay down sets of locutions that agents can make to put forward propositions and the arguments that support them, and protocols that define precisely which locutions can be made at which points in the dialogue. We are not concerned with such a level of detail here. Instead we are interested in the interplay between arguments that agents put forth. As a result, we will consider only that agents are allowed to put forward arguments. We do not discuss the detail of the mechanism that is used to put these arguments forward - we just assume that arguments of the form (S, p) are inserted into an agent's commitment store where they are then visible to other agents. We then have a typical definition of a dialogue:

Definition 3.1. A dialogue D is a sequence of moves: m1, m2, ..., mn. A given move mi is a pair ⟨α, Ai⟩ where Ai is an argument that α places into its commitment store CSα.

Moves in an argumentation-based dialogue typically attack moves that have been made previously. While, in general, a dialogue can include moves that undercut several arguments, in the remainder of this paper, we will only consider dialogues that put forward moves that undercut at most one argument. For now we place no additional constraints on the moves that make up a dialogue. Later we will see how different restrictions on moves lead to different kinds of dialogue. The sequence of arguments put forward in the dialogue is determined by the agents who are taking part in the dialogue, but they are usually not completely free to choose what arguments they make. As indicated earlier, their choice is typically limited by a protocol. If we write the sequence of n moves m1, m2, ..., mn as mn, and denote the empty sequence as m0, then we can define a protocol in the following way:

Definition 3.2. A protocol P is a function on a sequence of moves mi in a dialogue D that, for all i ≥ 0, identifies a set of possible moves Mi+1 from which the mi+1th move may be drawn:

P : mi → Mi+1

In other words, for our purposes here, at every point in a dialogue, a protocol determines a set of possible moves that agents may make as part of the dialogue. If a dialogue D always picks its moves m from the set M identified by protocol P, then D is said to conform to P.
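The fixpoint in Definition 2.6 translates directly into a short iterative computation, and since argument statuses (Definition 2.7) drive the relevance notions introduced below, a concrete sketch is useful. The code below is our own illustrative Python, not the paper's: arguments are opaque identifiers, undercuts is the Undercut relation given as a set of pairs, and level maps each argument to the preference level of its support, so that defends_itself implements Definition 2.5.

```python
def acceptable_arguments(args, undercuts, level):
    """Compute the acceptable set of Definition 2.6 by iterating F
    to its least fixpoint. args: iterable of argument ids;
    undercuts: set of (a, b) pairs meaning a undercuts b;
    level: argument -> preference level of its support."""
    args = set(args)

    def defends_itself(a, b):           # a defends itself against b
        return level[a] > level[b]

    def attackers(a):
        return [b for (b, c) in undercuts if c == a]

    def defended_by(pool, a):           # second bullet of Definition 2.5
        return all(
            defends_itself(a, b)
            or any((c, b) in undercuts and not defends_itself(b, c)
                   for c in pool)
            for b in attackers(a))

    # A_{Undercut,Pref}: non-undercut arguments plus those defending
    # themselves against all their undercutters.
    acc = {a for a in args
           if all(defends_itself(a, b) for b in attackers(a))}
    while True:                         # accumulate F^i, i >= 1
        new = {a for a in args if defended_by(acc, a)}
        if new <= acc:
            return acc
        acc |= new
```

A proposition's status is then accepted exactly when it is the conclusion of some argument in the returned set.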
Even if a dialogue conforms to a protocol, it is typically the case that the agent engaging in the dialogue has to make a choice of move - it has to choose which of the moves in M to make. This exercise of choice is what we refer to as an agent's use of rhetoric (in its oratorical sense of influencing the thought and conduct of an audience). Some of our results will give a sense of how much scope an agent has to exercise rhetoric under different protocols. As arguments are placed into commitment stores, and hence become public, agents can determine the relationships between them. In general, after several moves in a dialogue, some arguments will undercut others. We will denote the set of arguments {A1, A2, ..., Aj} asserted after moves m1, m2, ..., mj of a dialogue to be Aj - the relationship of the arguments in Aj can be described as an argumentation graph, similar to those described in, for example, [3, 4, 9]:

Definition 3.3. An argumentation graph AG over a set of arguments A is a directed graph (V, E) such that every vertex v, v ∈ V, denotes one argument A ∈ A, every argument A is denoted by one vertex v, and every directed edge e ∈ E from v to v′ denotes that v undercuts v′.

We will use the term argument graph as a synonym for argumentation graph. Note that we do not require that the argumentation graph is connected. In other words the notion of an argumentation graph allows for the representation of arguments that do not relate, by undercutting or being undercut, to any other arguments (we will come back to this point very shortly). We adapt some standard graph theoretic notions in order to describe various aspects of the argumentation graph. If there is an edge e from vertex v to vertex v′, then v is said to be the parent of v′ and v′ is said to be the child of v. In a reversal of the usual notion, we define a root of an argumentation graph (note that we talk of a root rather than the root - as defined, an argumentation graph need not be a tree) as follows:

Definition 3.4. A root of an argumentation graph AG = (V, E) is a node v ∈ V that has no children.

Thus a root of a graph is a node to which directed edges may be connected, but from which no directed edges connect to other nodes. Thus a root is a node representing an argument that is undercut, but which itself does no undercutting. Similarly:

Definition 3.5. A leaf of an argumentation graph AG = (V, E) is a node v ∈ V that has no parents.

Thus a leaf in an argumentation graph represents an argument that undercuts another argument, but is not itself undercut.

Figure 1: An example argument graph

Thus in Figure 1, v is a root, and v′ is a leaf. The reason for the reversal of the usual notions of root and leaf is that, as we shall see, we will consider dialogues to construct argumentation graphs from the roots (in our sense) to the leaves. The reversal of the terminology means that it matches the natural process of tree construction. Since, as described above, argumentation graphs are allowed to be not connected (in the usual graph theory sense), it is helpful to distinguish nodes that are connected to other nodes, in particular to the root of the tree. We say that node v is connected to node v′ if and only if there is a path from v to v′. Since edges represent undercut relations, the notion of connectedness between nodes captures the influence that one argument may have on another:

Proposition 3.1.
Given an argumentation graph AG, if there is any argument A, denoted by node v, that affects the status of another argument A′, denoted by v′, then v is connected to v′. The converse does not hold.

Proof. Given Definitions 2.5 and 2.6, the only ways in which A can affect the status of A′ is if A either undercuts A′, or if A undercuts some argument A′′ that undercuts A′, or if A undercuts some A′′ that undercuts some A′′′ that undercuts A′, and so on. In all such cases, a sequence of undercut relations relates the two arguments, and if they are both in an argumentation graph, this means that they are connected. Since the notion of path ignores the direction of the directed arcs, nodes v and v′ are connected whether the edge between them runs from v to v′ or vice versa. Since A only undercuts A′ if the edge runs from v to v′, we cannot infer that A will affect the status of A′ from information about whether or not they are connected.

The reason that we need the concept of the argumentation graph is that the properties of the argumentation graph tell us something about the set of arguments A the graph represents. When that set of arguments is constructed through a dialogue, there is a relationship between the structure of the argumentation graph and the protocol that governs the dialogue. It is the extent of the relationship between structure and protocol that is the main subject of this paper. To study this relationship, we need to establish a correspondence between a dialogue and an argumentation graph. Given the definitions we have so far, this is simple:

Definition 3.6. A dialogue D, consisting of a sequence of moves mn, and an argument graph AG = (V, E) correspond to one another iff ∀mi ∈ mn, the argument Ai that is advanced at move mi is represented by exactly one node v ∈ V, and ∀v ∈ V, v represents exactly one argument Ai that has been advanced by a move mi ∈ mn.

Thus a dialogue corresponds to an argumentation graph if and only if every argument made in the dialogue corresponds to a node in the graph, and every node in the graph corresponds to an argument made in the dialogue. This one-to-one correspondence allows us to consider each node v in the graph to have an index i which is the index of the move in the dialogue that put forward the argument which that node represents. Thus we can, for example, refer to the third node in the argumentation graph, meaning the node that represents the argument put forward in the third move of the dialogue.

4. RELEVANCE
Most work on dialogues is concerned with what we might call coherent dialogues, that is dialogues in which the participants are, as in the work of Walton and Krabbe [27], focused on resolving some question through the dialogue (see [11, 12] for examples of dialogues where this is not the case). To capture this coherence, it seems we need a notion of relevance to constrain the statements made by agents. Here we study three notions of relevance:

Definition 4.1. Consider a dialogue D, consisting of a sequence of moves mi, with a corresponding argument graph AG. The move mi+1, i ≥ 1, is said to be relevant if one or more of the following hold:
R1 Making mi+1 will change the status of the argument denoted by the first node of AG.
R2 Making mi+1 will add a node vi+1 that is connected to the first node of AG.
R3 Making mi+1 will add a node vi+1 that is connected to the last node to be added to AG.

R2-relevance is the form of relevance defined by [3] in their study of strategic and tactical reasoning (we consider such reasoning sub-types of rhetoric).
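The three notions can be checked mechanically on the argument graph. In the sketch below (our own illustrative code, under assumed encodings rather than the paper's), a basic dialogue is stored as undercut[j] = k, meaning move j undercut the argument of move k; since each move attacks at most one earlier argument, this determines the tree. For R1 we need a status computation at the root; for brevity this sketch treats every undercut as successful, ignoring preferences, where the full definition would plug in the acceptability computation sketched earlier.

```python
def statuses(n_nodes, undercut):
    """Label each node: a node is accepted iff every node that
    undercuts it is rejected (computed recursively over the tree).
    Simplification: preferences are ignored here."""
    attackers = {}                    # node -> nodes that undercut it
    for j, k in undercut.items():
        attackers.setdefault(k, []).append(j)
    memo = {}
    def accepted(v):
        if v not in memo:
            memo[v] = all(not accepted(c) for c in attackers.get(v, []))
        return memo[v]
    return {v: accepted(v) for v in range(1, n_nodes + 1)}

def r1_relevant(n, undercut, target):
    """Would a new move n+1 undercutting `target` flip the status of
    the first node (the root)?"""
    before = statuses(n, undercut)[1]
    extended = dict(undercut)
    extended[n + 1] = target
    return statuses(n + 1, extended)[1] != before

def r2_relevant(n, undercut, target):
    """New node attaches somewhere in the tree, and is therefore
    connected to the first node."""
    return 1 <= target <= n

def r3_relevant(n, undercut, target):
    """New node attacks the argument of the previous move."""
    return target == n
```

For example, with moves 2 and 3 both attacking move 1 (undercut = {2: 1, 3: 1}), a fourth move attacking node 3 is R3-relevant but not R1-relevant: node 1 is already rejected via node 2, so the root's status does not change.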
R1-relevance was suggested by the notion used in [15], and though it differs somewhat from that suggested there, we believe it captures the essence of its predecessor. Note that we only define relevance for the second move of the dialogue onwards because the first move is taken to identify the subject of the dialogue, that is, the central question that the dialogue is intended to answer, and hence it must be relevant to the dialogue, no matter what it is. In assuming this, we focus our attention on the same kind of dialogues as [18]. We can think of relevance as enforcing a form of parsimony on a dialogue - it prevents agents from making statements that do not bear on the current state of the dialogue. This promotes efficiency, in the sense of limiting the number of moves in the dialogue, and, as in [15], prevents agents revealing information that they might better keep hidden. Another form of parsimony is to insist that agents are not allowed to put forward arguments that will be undercut by arguments that have already been made during the dialogue. We therefore distinguish such arguments.

Definition 4.2. Consider a dialogue D, consisting of a sequence of moves mi, with a corresponding argument graph AG. The move mi+1 and the argument it puts forward, Ai+1, are both said to be pre-empted, if Ai+1 is undercut by some A ∈ Ai.

We use the term pre-empted because if such an argument is put forward, it can seem as though another agent anticipated the argument being made, and already made an argument that would render it useless. In the rest of this paper, we will only deal with protocols that permit moves that are relevant, in any of the senses introduced above, and are not allowed to be pre-empted. We call such protocols basic protocols, and dialogues carried out under such protocols basic dialogues. The argument graph of a basic dialogue is somewhat restricted.

Proposition 4.1. Consider a basic dialogue D. The argumentation graph AG that corresponds to D is a tree with a single root.

Proof. Recall that Definition 3.3 requires only that AG be a directed graph. To show that it is a tree, we have to show that it is acyclic and connected. That the graph is connected follows from the construction of the graph under a protocol that enforces relevance. If the notion of relevance is R3, each move adds a node that is connected to the previous node. If the notion of relevance is R2, then every move adds a node that is connected to the root, and thus is connected to some node in the graph. If the notion of relevance is R1, then every move has to change the status of the argument denoted by the root. Proposition 3.1 tells us that to affect the status of an argument A′, the node v representing the argument A that is effecting the change has to be connected to v′, the node representing A′, and so it follows that every new node added as a result of an R1-relevant move will be connected to the argumentation graph. Thus AG is connected. Since a basic dialogue does not allow moves that are pre-empted, every edge that is added during construction is directed from the node that is added to one already in the graph (thus denoting that the argument A denoted by the added node, v, undercuts the argument A′ denoted by the node to which the connection is made, v′, rather than the other way around).
Since every edge that is added is directed from the new node to the rest of the graph, there can be no cycles. Thus AG is a tree. To show that AG has a single root, consider its construction from the initial node. After m1 the graph has one node, v1, that is both a root and a leaf. After m2, the graph is two nodes connected by an edge, and v1 is now a root and not a leaf; v2 is a leaf and not a root. However the third node is added, the argument earlier in this proof demonstrates that there will be a directed edge from it to some other node, making it a leaf. Thus v1 will always be the only root. The ruling out of pre-empted moves means that v1 will never cease to be a root, and so the argumentation graph will always have one root.

Since every argumentation graph constructed by a basic dialogue is a tree with a single root, this means that the first node of every argumentation graph is the root. Although these results are straightforward to obtain, they allow us to show how the notions of relevance are related.

Proposition 4.2. Consider a basic dialogue D, consisting of a sequence of moves mi, with a corresponding argument graph AG.
1. Every move mi+1 that is R1-relevant is R2-relevant. The converse does not hold.
2. Every move mi+1 that is R3-relevant is R2-relevant. The converse does not hold.
3. Not every move mi+1 that is R1-relevant is R3-relevant, and not every move mi+1 that is R3-relevant is R1-relevant.

Proof. For 1, consider how move mi+1 can satisfy R1. Proposition 3.1 tells us that if Ai+1 can change the status of the argument denoted by the root v1 (which, as observed above, is the first node) of AG, then vi+1 must be connected to the root. This is precisely what is required to satisfy R2, and the relationship is proved to hold. To see that the converse does not hold, we have to consider what it takes to change the status of r (since Proposition 3.1 tells us that connectedness is not enough to ensure a change of status - if it did, R1 and R2 relevance would coincide). For mi+1 to change the status of the root, it will have to (1) make the argument A represented by r unacceptable, if it were acceptable before the move, or (2) acceptable, if it were unacceptable before the move. Given the definition of acceptability, it can achieve (1) either by directly undercutting the argument represented by r, in which case vi+1 will be directly connected to r by some edge, or by undercutting some argument A′ that is part of the set of non-undercut arguments defending A. In the latter case, vi+1 will be directly connected to the node representing A′ and, by Proposition 4.1, to r. To achieve (2), vi+1 will have to undercut an argument A′ that is either currently undercutting A, or is undercutting an argument that would otherwise defend A. Now, further consider that mi+1 puts forward an argument Ai+1 that undercuts the argument denoted by some node v′, but this latter argument defends itself against Ai+1. In such a case, the set of acceptable arguments will not change, and so the status of the argument denoted by r will not change. Thus a move that is R2-relevant need not be R1-relevant. For 2, consider that mi+1 can satisfy R3 simply by adding a node that is connected to vi, the last node to be added to AG. By Proposition 4.1, it is connected to r and so is R2-relevant. To see that the converse does not hold, consider that an R2-relevant move can connect to any node in AG.
The first part of 3 follows by a similar argument to the one we just used - an R1-relevant move does not have to connect to vi, just to some node that is part of the graph - and the second part follows since a move that is R3-relevant may introduce an argument Ai+1 that undercuts the argument Ai put forward by the previous move (and so vi+1 is connected to vi), but finds that Ai defends itself against Ai+1, preventing a change of status at the root.

What is most interesting is not so much the results but why they hold, since this reveals some aspects of the interplay between relevance and the structure of argument graphs. For example, to restate a case from the proof of Proposition 4.2, a move that is R3-relevant by definition has to add a node to the argument graph that is connected to the last node that was added. Since a move that is R2-relevant can add a node that connects anywhere on an argument graph, any move that is R3-relevant will be R2-relevant, but the converse does not hold. It turns out that we can exploit the interplay between structure and relevance that Propositions 4.1 and 4.2 have started to illuminate to establish relationships between the protocols that govern dialogues and the argument graphs constructed during such dialogues. To do this we need to define protocols in such a way that they refer to the structure of the graph. We have:

Definition 4.3. A protocol is single-path if all dialogues that conform to it construct argument graphs that have only one branch.

Proposition 4.3. A basic protocol P is single-path if, for all i, the set of permitted moves Mi at move i are all R3-relevant. The converse does not hold.

Proof. R3-relevance requires that every node added to the argument graph be connected to the previous node. Starting from the first node this recursively constructs a tree with just one branch, and the relationship holds. The converse does not hold because even if one or more moves in the protocol are R1- or R2-relevant, it may be the case that, because of an agent's rhetorical choice or because of its knowledge, every argument that is chosen to be put forward will undercut the previous argument and so the argument graph is a one-branch tree.

Looking for more complex kinds of protocol that construct more complex kinds of argument graph, it is an obvious move to turn to:

Definition 4.4. A basic protocol is multi-path if all dialogues that conform to it can construct argument graphs that are trees.

But, on reflection, since any graph with only one branch is also a tree:

Proposition 4.4. Any single-path protocol is an instance of a multi-path protocol.

and, furthermore:

Proposition 4.5. Any basic protocol P is multi-path.

Proof. Immediate from Proposition 4.1.

So the notion of a multi-path protocol does not have much traction. As a result we distinguish multi-path protocols that permit dialogues that can construct trees that have more than one branch as bushy protocols. We then have:

Proposition 4.6. A basic protocol P is bushy if, for some i, the set of permitted moves Mi at move i are all R1- or R2-relevant.

Proof. From Proposition 4.3 we know that if all moves are R3-relevant then we'll get a tree with one branch, and from Proposition 4.1 we know that all basic protocols will build an argument graph that is a tree, so providing we exclude R3-relevant moves, we will get protocols that can build multi-branch trees.
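The single-path versus bushy distinction is easy to see concretely. Under the same encoding as in the earlier sketch (each move's target is the single argument it undercuts), the following illustrative Python code, ours rather than the paper's, counts the branches of the argument tree and the moves that are not R3-relevant; comparing the two values checks the branch bound proved next.

```python
def branches(targets):
    """targets[j] = node attacked by move j + 2 (move 1 opens the
    dialogue and attacks nothing). Branch count = number of leaves,
    i.e. nodes that nothing undercuts."""
    n = len(targets) + 1                       # nodes 1..n
    attacked = set(targets)
    return sum(1 for v in range(1, n + 1) if v not in attacked)

def non_r3_moves(targets):
    """Moves that do not attack the immediately preceding argument."""
    return sum(1 for j, t in enumerate(targets, start=2) if t != j - 1)

# targets = [1, 2, 3]: every move attacks its predecessor, giving one
# branch (a single-path dialogue). targets = [1, 1, 3]: the third move
# starts a second branch, and branches(...) <= non_r3_moves(...) + 1.
```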
Of course, since, by Proposition 4.2, any move that is R3-relevant is R2-relevant and can quite possibly be R1-relevant (all that Proposition 4.2 tells us is that there is no guarantee that it will be), all that Proposition 4.6 tells us is that dialogues that conform to bushy protocols may have more than one branch. All we can do is to identify a bound on the number of branches:

Proposition 4.7. Consider a basic dialogue D that includes m moves that are not R3-relevant, and has a corresponding argumentation graph AG. The number of branches in AG is less than or equal to m + 1.

Proof. Since it must connect a node to the last node added to AG, an R3-relevant move can only extend an existing branch. Since they do not have the same restriction, R1- and R2-relevant moves may create a new branch by connecting to a node that is not the last node added. Every such move could create a new branch, and if they do, we will have m branches. If there were R3-relevant moves before any of these new-branch-creating moves, then these m branches are in addition to the initial branch created by the R3-relevant moves, and we have a maximum of m + 1 possible branches.

We distinguish bushy protocols from multi-path protocols, and hence R1- and R2-relevance from R3-relevance, because of the kinds of dialogue that R3-relevance enforces. In a dialogue in which all moves must be R3-relevant, the argumentation graph has a single branch - the dialogue consists of a sequence of arguments each of which undercuts the previous one and the last move to be made is the one that settles the dialogue. This, as we will see next, means that such a dialogue only allows a subset of all the moves that would otherwise be possible.

5. COMPLETENESS
The above discussion of the difference between dialogues carried out under single-path and bushy protocols brings us to the consideration of what [18] called predeterminism, but we now prefer to describe using the term completeness. The idea of predeterminism, as described in [18], captures the notion that, under some circumstances, the result of a dialogue can be established without actually having the dialogue - the agents have sufficiently little room for rhetorical manoeuvre that were one able to see the contents of all the Σi of all the αi ∈ A, one would be able to identify the outcome of any dialogue on a given subject (assuming that the Σi do not change during the dialogue, which is the usual assumption in this kind of dialogue). We develop this idea by considering how the argument graphs constructed by dialogues under different protocols compare to benchmark complete dialogues. We start by developing ideas of what complete might mean. One reasonable definition is that:

Definition 5.1. A basic dialogue D between the set of agents A with a corresponding argumentation graph AG is topic-complete if no agent can construct an argument A that undercuts any argument A′ represented by a node in AG. The argumentation graph constructed by a topic-complete dialogue is called a topic-complete argumentation graph and is denoted AG(D)T.

A dialogue is topic-complete when no agent can add anything that is directly connected to the subject of the dialogue. Some protocols will prevent agents from making moves even though the dialogue is not topic-complete. To distinguish such cases we have:

Definition 5.2.
A basic dialogue D between the set of agents A with a corresponding argumentation graph AG is protocol-complete under a protocol P if no agent can make a move that adds a node to the argumentation graph that is permitted by P. The argumentation graph constructed by a protocol-complete dialogue is called a protocol-complete argumentation graph and is denoted AG(D)P.

Clearly:

Proposition 5.1. Any dialogue D under a basic protocol P is protocol-complete if it is topic-complete. The converse does not hold in general.

Proof. If D is topic-complete, no agent can make a move that will extend the argumentation graph. This means that no agent can make a move that is permitted by a basic protocol, and so D is also protocol-complete. The converse does not hold since some basic dialogues (under a protocol that only permits R3-relevant moves, for example) will not permit certain moves (like the addition of a node that connects to the root of the argumentation graph after more than two moves) that would be allowed in a topic-complete dialogue.

Corollary 5.1. For a basic dialogue D, AG(D)P is a sub-graph of AG(D)T.

Obviously, from the definition of a sub-graph, the converse of Corollary 5.1 does not hold in general. The important distinction between topic- and protocol-completeness is that the former is determined purely by the state of the dialogue - as captured by the argumentation graph - and is thus independent of the protocol, while the latter is determined entirely by the protocol. Any time that a dialogue ends in a state of protocol-completeness rather than topic-completeness, it is ending when agents still have things to say but can't because the protocol won't allow them to. With these definitions of completeness, our task is to relate topic-completeness - the property that ensures that agents can say everything that they have to say in a dialogue that is, in some sense, important - to the notions of relevance we have developed - which determine what agents are allowed to say. When we need very specific conditions to make protocol-complete dialogues topic-complete, it means that agents have lots of room for rhetorical manoeuvre when those conditions are not in force. That is, there are many ways they can bring dialogues to a close before everything that can be said has been said. Where few conditions are required, or conditions are absent, then dialogues between agents with the same knowledge will always play out the same way, and rhetoric has no place. We have:

Proposition 5.2. A protocol-complete basic dialogue D under a protocol which only allows R3-relevant moves will be topic-complete only when AG(D)T has a single branch in which the nodes are labelled in increasing order from the root.

Proof. Given what we know about R3-relevance, the condition on AG(D)P having a single branch is obvious. This is not a sufficient condition on its own because certain protocols may prevent - through additional restrictions, like strict turn-taking in a multi-party dialogue - all the nodes in AG(D)T, which is not subject to such restrictions, being added to the graph. Only when AG(D)T includes the nodes in the exact order that the corresponding arguments are put forward is it necessary that a topic-complete argumentation graph be constructed.
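Both completeness notions reduce to quantifying over the moves still available, given an oracle for what each agent can construct. The sketch below is a minimal illustration under assumed interfaces of our own devising, not an API from the paper: can_attack(agent, node) says whether the agent can build a new, non-pre-empted argument undercutting the argument at that node, and permitted(agent, node) encodes the protocol P.

```python
def topic_complete(agents, nodes, can_attack):
    """No agent can extend the graph at all (Definition 5.1)."""
    return not any(can_attack(a, v) for a in agents for v in nodes)

def protocol_complete(agents, nodes, can_attack, permitted):
    """No agent can make a move the protocol allows (Definition 5.2)."""
    return not any(can_attack(a, v) and permitted(a, v)
                   for a in agents for v in nodes)
```

Since permitted can only remove options, topic_complete implies protocol_complete, which is Proposition 5.1; an R3-only protocol makes permitted true only at the last node added, which is how protocol-completeness can arrive before topic-completeness.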
Given Proposition 5.1, these are the conditions under which dialogues conducted under the notion of R3-relevance will always be predetermined, and given how restrictive the conditions are, such dialogues seem to have plenty of room for rhetoric to play a part. To find similar conditions for dialogues composed of R1- and R2-relevant moves, we first need to distinguish between them. We can do this in terms of the structure of the argumentation graph:

Proposition 5.3. Consider a basic dialogue D, with argumentation graph AG which has root r denoting an argument A. If argument A′, denoted by node v′, is advanced by an R2-relevant move m, then m is not R1-relevant if and only if:
1. there are two adjacent nodes v′′ and v′′′ on the path between v′ and r, where v′′ undercuts v′′′ and the argument denoted by v′′′ defends itself against the argument denoted by v′′; or
2. there is an argument A′′, denoted by node v′′, that affects the status of A, and the path from v′′ to r has one or more nodes in common with the path from v′ to r.

Proof. For the first condition, consider that since AG is a tree, v′ is connected to r. Thus there is a series of undercut relations between A′ and A, and this corresponds to a path through AG. If this path is the only branch in the tree, then A′ will affect the status of A unless the chain of affect is broken by an undercut that can't change the status of the undercut argument because the latter defends itself. For the second condition, as for the first, the only way that A′ cannot affect the status of A is if something is blocking its influence. If this is not due to defending against, it must be because there is some node u on the path that represents an argument whose status is fixed somehow, and that must mean that there is another chain of undercut relations, another branch of the tree, that is incident at u. Since this second branch denotes another chain of arguments, and these affect the status of the argument denoted by u, they must also affect the status of A. Any of these are the A′′ in the condition.

So an R2-relevant move m is not R1-relevant if either its effect is blocked because an argument upstream is not strong enough, or because there is another line of argument that is currently determining the status of the argument at the root. This, in turn, means that if the effect is not due to defending against, then there is an alternative move that is R1-relevant - a move that undercuts A′′ in the second condition above (though whether the agent in question can make such a move is another question). We can now show:

Proposition 5.4. A protocol-complete basic dialogue D will always be topic-complete under a protocol which only includes R2-relevant moves and allows every R2-relevant move to be made.

The restriction on R2-relevant rules is exactly that for topic-completeness, so a dialogue that has only R2-relevant moves will continue until every argument that any agent can make has been put forward. Given this, and what we revealed about R1-relevance in Proposition 5.3, we can see that:

Proposition 5.5. A protocol-complete basic dialogue D under a protocol which only includes R1-relevant moves will be topic-complete if AG(D)T:
1. includes no path with adjacent nodes v, denoting A, and v′, denoting A′, such that A undercuts A′ and A′ is stronger than A; and
2. is such that the nodes in every branch have consecutive indices and no node with degree greater than two is an odd number of arcs from a leaf node.

Proof.
Proof. The first condition rules out the first condition in Proposition 5.3, and the second deals with the situation that leads to the second condition in Proposition 5.3. The second condition ensures that each branch is constructed in full before any new branch is added, and when a new branch is added, the argument that is undercut as part of the addition will be acceptable, and so the addition will change the status of the argument denoted by that node, and hence the root. With these conditions, every move required to construct AG(D)T will be permitted and so the dialogue will be topic-complete when every move has been completed.

The second part of this result only identifies one possible way to ensure that the second condition in Proposition 5.3 is met, so the converse of this result does not hold. However, what we have is sufficient to answer the question about predetermination that we started with. For dialogues to be predetermined, every move that is R2-relevant must be made. In such cases every dialogue is topic-complete. If we do not require that all R2-relevant moves are made, then there is some room for rhetoric - the way in which alternative lines of argument are presented becomes an issue. If moves are forced to be R3-relevant, then there is considerable room for rhetorical play.

6. SUMMARY

This paper has studied the different ideas of relevance in argumentation-based dialogue, identifying the relationship between these ideas, and showing how they can impact the extent to which the way that agents choose moves in a dialogue - what some authors have called the strategy and tactics of a dialogue. This extends existing work on relevance, such as [3, 15], by showing how different notions of relevance can have an effect on the outcome of a dialogue, in particular when they render the outcome predetermined. This connection extends the work of [18], which considered dialogue outcome, but stopped short of identifying the conditions under which it is predetermined.

There are two ways we are currently trying to extend this work, both of which will generalise the results and extend its applicability. First, we want to relax the restrictions that we have imposed: the exclusion of moves that attack several arguments (without which the argument graph can be multiply-connected) and the exclusion of pre-empted moves, without which the argument graph can have cycles. Second, we want to extend the ideas of relevance to cope with moves that do not only add undercutting arguments, but also supporting arguments, thus taking account of bipolar argumentation frameworks [5].

Acknowledgments

The authors are grateful for financial support received from the EC, through project IST-FP6-002307, and from the NSF under grants REC-02-19347 and NSF IIS-0329037. They are also grateful to Peter Stone for a question, now several years old, which this paper has finally answered.

7. REFERENCES

[1] L. Amgoud and C. Cayrol. On the acceptability of arguments in preference-based argumentation framework. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pages 1-7, 1998.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. In W. Horn, editor, Proceedings of the Fourteenth European Conference on Artificial Intelligence, pages 338-342, Berlin, Germany, 2000. IOS Press.
[3] J. Bentahar, M. Mbarki, and B. Moulin. Strategic and tactic reasoning for communicating agents. In N. Maudet, I. Rahwan, and S. Parsons, editors, Proceedings of the Third Workshop on Argumentation in Multiagent Systems, Hakodate, Japan, 2006.
[4] P. Besnard and A. Hunter. A logic-based theory of deductive arguments. Artificial Intelligence, 128:203-235, 2001.
[5] C. Cayrol, C. Devred, and M.-C. Lagasquie-Schiex. Handling controversial arguments in bipolar argumentation frameworks. In P. E. Dunne and T. J. M. Bench-Capon, editors, Computational Models of Argument: Proceedings of COMMA 2006, pages 261-272. IOS Press, 2006.
[6] B. Chaib-Draa and F. Dignum. Trends in agent communication language. Computational Intelligence, 18(2):89-101, 2002.
[7] F. Dignum, B. Dunin-Kęplicz, and R. Verbrugge. Agent theory for team formation by dialogue. In C. Castelfranchi and Y. Lespérance, editors, Seventh Workshop on Agent Theories, Architectures, and Languages, pages 141-156, Boston, USA, 2000.
[8] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357, 1995.
[9] P. M. Dung, R. A. Kowalski, and F. Toni. Dialectic proof procedures for assumption-based, admissible argumentation. Artificial Intelligence, 170(2):114-159, 2006.
[10] R. A. Flores and R. C. Kremer. To commit or not to commit. Computational Intelligence, 18(2):120-173, 2002.
[11] D. M. Gabbay and J. Woods. More on non-cooperation in Dialogue Logic. Logic Journal of the IGPL, 9(2):321-339, 2001.
[12] D. M. Gabbay and J. Woods. Non-cooperation in Dialogue Logic. Synthese, 127(1-2):161-186, 2001.
[13] C. L. Hamblin. Mathematical models of dialogue. Theoria, 37:130-155, 1971.
[14] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence, 104(1-2):1-69, 1998.
[15] N. Oren, T. J. Norman, and A. Preece. Loose lips sink ships: A heuristic for argumentation. In N. Maudet, I. Rahwan, and S. Parsons, editors, Proceedings of the Third Workshop on Argumentation in Multiagent Systems, Hakodate, Japan, 2006.
[16] S. Parsons and N. R. Jennings. Negotiation through argumentation - a preliminary report. In Proceedings of the Second International Conference on Multi-Agent Systems, pages 267-274, 1996.
[17] S. Parsons, M. Wooldridge, and L. Amgoud. An analysis of formal inter-agent dialogues. In 1st International Conference on Autonomous Agents and Multi-Agent Systems. ACM Press, 2002.
[18] S. Parsons, M. Wooldridge, and L. Amgoud. On the outcomes of formal inter-agent dialogues. In 2nd International Conference on Autonomous Agents and Multi-Agent Systems. ACM Press, 2003.
[19] H. Prakken. On dialogue systems with speech acts, arguments, and counterarguments. In Proceedings of the Seventh European Workshop on Logic in Artificial Intelligence, Berlin, Germany, 2000. Springer Verlag.
[20] H. Prakken. Relating protocols for dynamic dispute with logics for defeasible argumentation. Synthese, 127:187-219, 2001.
[21] H. Prakken and G. Sartor. Modelling reasoning with precedents in a formal dialogue game. Artificial Intelligence and Law, 6:231-287, 1998.
[22] I. Rahwan, P. McBurney, and E. Sonenberg. Towards a theory of negotiation strategy. In I. Rahwan, P. Moraitis, and C. Reed, editors, Proceedings of the 1st International Workshop on Argumentation in Multiagent Systems, New York, NY, 2004.
[23] C. Reed. Dialogue frames in agent communications. In Y. Demazeau, editor, Proceedings of the Third International Conference on Multi-Agent Systems, pages 246-253. IEEE Press, 1998.
[24] M. Rovatsos, I. Rahwan, F. Fisher, and G. Weiss. Adaptive strategies for practical argument-based negotiation. In I. Rahwan, P. Moraitis, and C. Reed, editors, Proceedings of the 1st International Workshop on Argumentation in Multiagent Systems, New York, NY, 2004.
[25] M. Schroeder, D. A. Plewe, and A. Raab. Ultima ratio: should Hamlet kill Claudius? In Proceedings of the 2nd International Conference on Autonomous Agents, pages 467-468, 1998.
[26] K. Sycara. Argumentation: Planning other agents' plans. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pages 517-523, 1989.
[27] D. N. Walton and E. C. W. Krabbe. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press, Albany, NY, USA, 1995.
I-60
On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks
As more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads. Indeed, car manufacturers are already equipping their cars with such devices. Though currently these systems are proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban areas. Nonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers. These car owners will try to manipulate their agents such that they transmit false data to their peers. Using a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits achieved by self-interested agents if no counter-measures are implemented.
[ "self-interest agent", "self-interest agent", "vehicular network", "intellig agent", "social network", "journei length", "chao", "selfinterest agent", "agent-base deploi applic", "artifici social system" ]
[ "P", "P", "P", "M", "R", "U", "U", "M", "M", "M" ]
On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks∗

Raz Lin and Sarit Kraus, Computer Science Department, Bar-Ilan University, Ramat-Gan, Israel, {linraz,sarit}@cs.biu.ac.il
Yuval Shavitt, School of Electrical Engineering, Tel-Aviv University, Israel, shavitt@eng.tau.ac.il

ABSTRACT

As more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads. Indeed, car manufacturers are already equipping their cars with such devices. Though currently these systems are proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban areas. Nonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers. These car owners will try to manipulate their agents such that they transmit false data to their peers. Using a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits achieved by self-interested agents if no counter-measures are implemented.

Categories and Subject Descriptors

I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Intelligent agents

General Terms

Experimentation

1. INTRODUCTION

As technology advances, more and more cars are being equipped with devices which enable them to act as autonomous agents. An important advancement in this respect is the introduction of ad-hoc communication networks (such as Wi-Fi), which enable the exchange of information between cars, e.g., for locating road congestion [1] and optimal routes [15] or improving traffic safety [2]. Vehicle-To-Vehicle (V2V) communication is already provided on board by some car manufacturers, enabling the collaboration between different cars on the road. For example, GM's proprietary algorithm [6], called the threat assessment algorithm, constantly calculates, in real time, other vehicles' positions and speeds, and enables messaging other cars when a collision is imminent; also, Honda has begun testing its system in which vehicles talk with each other and with the highway system itself [7].

In this paper, we investigate the attraction of being a selfish agent in vehicular networks. That is, we investigate the benefits achieved by car owners who tamper with on-board devices and incorporate their own self-interested agents in them, which act for their benefit. We build on the notion of Gossip Networks, introduced by Shavitt and Shay [15], in which the agents can obtain road congestion information by gossiping with peer agents using ad-hoc communication.

We recognize two typical behaviors that the self-interested agents could embark upon in the context of vehicular networks. In the first behavior, described in Section 4, the objective of the self-interested agents is to maximize their own utility, expressed by their average journey duration on the road. This situation can be modeled in real life by car owners whose aim is to reach their destination as fast as possible, and who would like to have their way free of other cars. To this end they will let their agents cheat the other agents by injecting false information into the network: they report heavy traffic values for the roads on their route to other agents in the network, in the hope of making the other agents believe that the route is jammed, and causing them to choose a different route.
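To make the incentive concrete, here is a toy model of this first behavior. Everything in it is an assumption of the example rather than the simulation environment described later in the paper: the road network, the inflation factor, and all names are invented. A peer that routes on gossiped travel times abandons the cheater's route once the cheater reports it as jammed.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbour: travel_time}}; returns (cost, path)."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

true_times = {
    "A": {"B": 2, "C": 5},
    "B": {"D": 2},
    "C": {"D": 2},
    "D": {},
}

# The cheater drives A -> B -> D and gossips that this route is jammed.
gossiped = {u: dict(vs) for u, vs in true_times.items()}
for u, v in [("A", "B"), ("B", "D")]:
    gossiped[u][v] *= 10  # inflated load report

print(shortest_path(true_times, "A", "D"))  # (4, ['A', 'B', 'D'])
print(shortest_path(gossiped, "A", "D"))    # (7, ['A', 'C', 'D'])
```

The cheater's own route (A to D via B) is thus left clearer of peers who believed the report, which is the kind of utility gain, in reduced journey duration, that Section 4 measures.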
This is achieved by reporting heavy traffic values for the roads on their route to other agents in the network, in the hope of making the other agents believe that the route is jammed and causing them to choose a different route. The second type of behavior, described in Section 5, is modeled by the self-interested agents' objective to cause disorder in the network, more than they are interested in maximizing their own utility. This kind of behavior could be generated, for example, by vandals or terrorists, who aim to cause as much mayhem in the network as possible. We note that the introduction of self-interested agents to the network would most probably motivate other agents to try and detect these agents in order to minimize their effect. This is similar, though in a different context, to the problem introduced by Lamport et al. [8] as the Byzantine Generals Problem. However, the introduction of mechanisms to deal with self-interested agents is costly and time consuming. In this paper we focus mainly on the attractiveness of selfish behavior by these agents, while we also provide some insights into the possibility of detecting self-interested agents and minimizing their effect. To demonstrate the benefits achieved by self-interested agents, we have used a simulation environment, which models the transportation network in a central part of a large real city. The simulation environment is further described in Section 3. Our simulations provide insights into the benefits of cheating by self-interested agents. Our findings can motivate future research in this field aimed at minimizing the effect of selfish agents. The rest of this paper is organized as follows. In Section 2 we review related work in the field of self-interested agents and V2V communications. We continue and formally describe our environment and simulation settings in Section 3. Sections 4 and 5 describe the different behaviors of the self-interested agents and our findings. Finally, we conclude the paper with open questions and future research directions. 2. RELATED WORK In their seminal paper, Lamport et al. [8] describe the Byzantine Generals problem, in which processors need to handle malfunctioning components that give conflicting information to different parts of the system. They also present a model in which not all agents are connected, and thus an agent cannot send a message to all the other agents. Dolev et al. [5] have built on this problem and have analyzed the number of faulty agents that can be tolerated in order to eventually reach the right conclusion about true data. Similar work is presented by Minsky et al. [11], who discuss techniques for constructing gossip protocols that are resilient to up to t malicious host failures. As opposed to the above works, our work focuses on vehicular networks, in which the agents are constantly roaming the network and exchanging data. Also, the domain of transportation networks introduces dynamic data, as the load of the roads is subject to change. In addition, the system in transportation networks has a feedback mechanism, since the load on the roads depends on the reports and the movement of the agents themselves. Malkhi et al. [10] present a gossip algorithm for propagating information in a network of processors in the presence of malicious parties. Their algorithm prevents the spreading of spurious gossip and diffuses genuine data.
This is done in time logarithmic in the number of processes and linear in the number of corrupt parties. Nevertheless, their work assumes that the network is static and also that the agents are static (they discuss a network of processors). This is not true for transportation networks. For example, in our model, agents might gossip about the heavy traffic load of a specific road, which is currently jammed, yet this information might be false several minutes later, leaving the agents to speculate whether the spreading agents are indeed malicious or not. In addition, as the agents are constantly moving, each agent cannot choose with whom he interacts and exchanges data. In the context of analyzing the data and deciding whether the data is true or not, researchers have focused on distributed reputation systems or decision mechanisms to decide whether or not to share data. Yu and Singh [18] build a social network of agents' reputations. Every agent keeps a list of its neighbors, which can be changed over time, and computes the trustworthiness of other agents by updating the current values of testimonies obtained from reliable referral chains. After a bad experience with another agent, every agent decreases the rating of the "bad" agent and propagates this bad experience throughout the network so that other agents can update their ratings accordingly. This approach might be implemented in our domain to allow gossip agents to identify self-interested agents and thus minimize their effect. However, the implementation of such a mechanism is an expensive addition to the infrastructure of autonomous agents in transportation networks. This is mainly due to the dynamic nature of the list of neighbors in transportation networks. Thus, not only does it require maintaining the neighbors' list, since the neighbors change frequently, but it is also harder to build a good reputation system. Leckie et al. [9] focus on the issue of when to share information between the agents in the network. Their domain involves monitoring distributed sensors. Each agent monitors a subset of the sensors and evaluates a hypothesis based on the local measurements of its sensors. If the agent believes that a hypothesis is sufficiently likely, he exchanges this information with the other agents. In their domain, the goal of all the agents is to reach a global consensus about the likelihood of the hypothesis. In our domain, however, as the agents constantly move, they have many samples, which they exchange with each other. Also, the data might vary (e.g., a road might be reported as jammed, but a few minutes later it could be free), thus making it harder to decide whether to trust the agent who sent the data. Moreover, the agent might lie only about a subset of its samples, thus making it even harder to detect his cheating. Some work has been done in the context of gossip networks or transportation networks regarding the spreading of data and its dissemination. Datta et al. [4] focus on information dissemination in mobile ad-hoc networks (MANETs). They propose an autonomous gossiping algorithm for an infrastructure-less mobile ad-hoc networking environment. Their autonomous gossiping algorithm uses a greedy mechanism to spread data items in the network. The data items are spread to immediate neighbors that are interested in the information, and avoid ones that are not interested. The decision as to which node is interested in the information is made by the data item itself, using heuristics.
However, their work concentrates on the movement of the data itself, and not on the agents who propagate the data. This is different from our scenario, in which each agent maintains the data it has gathered, while the agent itself roams the roads and is responsible (and has the capabilities) for spreading the data to other agents in the network. Das et al. [3] propose a cooperative strategy for content delivery in vehicular networks. In their domain, peers download a file from a mesh and exchange pieces of the file among themselves. We, on the other hand, are interested in vehicular networks in which there is no rule forcing the agents to cooperate among themselves. Shibata et al. [16] propose a method for cars to cooperatively and autonomously collect traffic jam statistics to estimate arrival time to destinations for each car. The communication is based on IEEE 802.11, without using a fixed infrastructure on the ground. While we use the same domain, we focus on a different problem. Shibata et al. [16] mainly focus on efficiently broadcasting the data between agents (e.g., avoiding duplicates and communication overhead), whereas we focus on the case where agents are not cooperative in nature, and on how selfish agents affect other agents and the network load. Wang et al. [17] also assert, in the context of wireless networks, that individual agents are likely to do what is most beneficial for their owners, and will act selfishly. They design a protocol for communication in networks in which all agents are selfish. Their protocol motivates every agent to maximize its profit only when it behaves truthfully (a mechanism of incentive compatibility). However, the domain of wireless networks is quite different from the domain of transportation networks. In the wireless network, the wireless terminal is required to contribute its local resources to transmit data. Thus, Wang et al. [17] use a payment mechanism, which attaches costs to terminals when transmitting data, and thus enables them to maximize their utility when transmitting data, instead of acting selfishly. Unlike this, in the context of transportation networks, constructing such a mechanism is not quite a straightforward task, as self-interested agents and regular gossip agents might incur the same cost when transmitting data. The difference between the two types of agents exists only in the credibility of the data they exchange. In the next section, we will describe our transportation network model and gossiping between the agents. We will also describe the different agents in our system. 3. MODEL AND SIMULATIONS We first describe the formal transportation network model, and then we describe the simulation designs. 3.1 Formal Model Following Shavitt and Shay [15] and Parshani [13], the transportation network is represented by a directed graph G(V, E), where V is the set of vertices representing junctions, and E is the set of edges, representing roads. An edge e ∈ E is associated with a weight w > 0, which specifies the time it takes to traverse the road associated with that edge. The roads' weights vary in time according to the network (traffic) load. Each car, which is associated with an autonomous agent, is given a pair of origin and destination points (vertices). A journey is defined as the (not necessarily simple) path taken by an agent between the origin vertex and the destination vertex.
We assume that there is always a path between a source and a destination. A journey length is defined as the sum of all weights of the edges constituting this path. Every agent has to travel between its origin and destination points and aims to minimize its journey length. Initially, agents are ignorant about the state of the roads. Regular agents are only capable of gathering information about the roads as they traverse them. However, we assume that some agents have means of inter-vehicle communication (e.g., IEEE 802.11) with a given communication range, which enables them to communicate with other agents with the same device. Those agents are referred to as gossip agents. Since the communication range is limited, the exchange of information using gossiping is done in one of two ways: (a) between gossip agents passing one another, or (b) between gossip agents located at the same junction. We assume that each agent stores the most recent information it has received or gathered about the edges in the network. A subset of the gossip agents are those agents who are self-interested and manipulate the devices for their own benefit. We will refer to these agents as self-interested agents. A detailed description of their behavior is given in Sections 4 and 5. 3.2 Simulation Design Building on [13], the network in our simulations replicates a central part of a large city, and consists of 50 junctions and 150 roads, which are approximately the number of main streets in the city. Each simulation consists of 6 iterations. The basic time unit of the iteration is a step, which is equivalent to about 30 seconds. Each iteration simulates six hours of movement. The average number of cars passing through the network during an iteration is about 70,000, and the average number of cars in the network at a specific time unit is about 3,500 cars. In each iteration the same agents are used with the same origin and destination points, whereas the data collected in earlier iterations is preserved in the later iterations (referred to as the history of the agent). This allows us to roughly simulate a daily routine in the transportation network (e.g., a working week). Each of the experiments that we describe below is run with 5 different traffic scenarios. The traffic scenarios differ from one another in the initial load of the roads and the designated routes of the agents (cars) in the network. For each such scenario 5 simulations are run, creating a total of 25 simulations for each experiment. It has been shown by Parshani et al. [13, 14] that information propagation in the network is very efficient when the percentage of gossiping agents is 10% or more. Yet, due to congestion caused by too many cars rushing to what is reported as the less congested part of the network, 20-30% of gossiping agents leads to the most efficient routing results in their experiments. Thus, in our simulations, we focus only on settings in which the percentage of gossip agents is 20%. The simulations were done with different percentages of self-interested agents. To gain statistical significance we ran each simulation with changes in the set of the gossip agents and the set of the self-interested agents. In order to obtain a common ordinal scale, the results were normalized. The normalized values were calculated by comparing each agent's result to its result when the same scenario was run with no self-interested agents. This was done for all of the iterations.
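To make the model concrete, the following minimal sketch shows one way to represent an agent's view of the road graph, the merge-by-recency gossip exchange of Section 3.1, and the journey-length computation over believed weights. It is our illustration, not the authors' simulator; the class and method names (GossipAgent, journey_length) and the default weight for unreported roads are assumptions.

import heapq

class GossipAgent:
    def __init__(self, graph):
        self.graph = graph        # adjacency: junction -> [(neighbor, road_id)]
        self.beliefs = {}         # road_id -> (timestamp, believed traversal time)

    def observe(self, road_id, weight, now):
        # First-hand data, gathered while traversing the road.
        self.beliefs[road_id] = (now, weight)

    def exchange(self, other):
        # Gossip on passing one another or at a shared junction:
        # each side keeps the most recent report per road.
        for mine, theirs in ((self.beliefs, other.beliefs),
                             (other.beliefs, self.beliefs)):
            for road, (ts, w) in list(theirs.items()):
                if road not in mine or mine[road][0] < ts:
                    mine[road] = (ts, w)

    def journey_length(self, origin, dest, default_w=1.0):
        # Dijkstra over believed weights; roads with no report get a default.
        dist, heap = {origin: 0.0}, [(0.0, origin)]
        while heap:
            d, v = heapq.heappop(heap)
            if v == dest:
                return d
            if d > dist.get(v, float("inf")):
                continue
            for u, road in self.graph.get(v, []):
                w = self.beliefs.get(road, (0, default_w))[1]
                if d + w < dist.get(u, float("inf")):
                    dist[u] = d + w
                    heapq.heappush(heap, (d + w, u))
        return float("inf")       # unreachable; excluded by assumption

Merging by recency matches the model's assumption that each agent stores only the most recent information per road; it is also why stale lies fade over time, a point the authors return to in the conclusions.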
Using the normalized values enabled us to see how much worse (or better) each agent would perform compared to the basic setting. For example, if the average journey length of a certain agent in iteration 1 with no self-interested agents was 50, and the length was 60 in the same scenario and iteration in which self-interested agents were involved, then the normalized value for that agent would be 60/50 = 1.2. More details regarding the simulations are described in Sections 4 and 5. 4. SPREADING LIES, MAXIMIZING UTILITY In the first set of experiments we investigated the benefits achieved by the self-interested agents, whose aim was to minimize their own journey length. The self-interested agents adopted a cheating approach, in which they sent false data to their peers. In this section we first describe the simulations with the self-interested agents. Then, we model the scenario as a game with two types of agents, and prove that the equilibrium result can only be achieved when there is no efficient exchange of gossiping information in the network. 4.1 Modeling the Self-Interested Agents' Behavior While the gossip agents gather data and send it to other agents, the self-interested agents' behavior is modeled as follows:
1. Calculate the shortest path from origin to destination.
2. Communicate the following data to other agents:
   (a) If the road is not on the agent's route, send the true data about it (e.g., data about roads it has received from other agents).
   (b) For all roads on the agent's route that the agent has not yet traversed, send a random high weight.
Basically, the self-interested agent acts the same as the gossip agent. It collects data regarding the weights of the roads (either by traversing the road or by getting the data from other agents) and sends the data it has collected to other agents. However, the self-interested agent acts differently when the road is on its route. Since the agent's goal is to reach its destination as fast as possible, the agent will falsely report that all the roads on its route are heavily congested. This is in order to free the path for itself, by making other agents recalculate their paths, this time without including roads on the self-interested agent's route. To this end, for all the roads on its route which the agent has not yet passed, the agent generates a random weight, which is above the average weight of the roads in the network. It then associates these new weights with the roads on its route and sends them to the other agents. While an agent could also divert cars from its route by falsely reporting congested roads parallel to its route as free, this behavior is not very likely, since other agents, attempting to use those roads, would discover the mistake within a short time and spread the true congestion of the road. On the other hand, if an agent manages to persuade other agents not to use a road, it will be harder for them to detect that the said road is not congested. In addition, to avoid being influenced by its own lies and other lies spreading in the network, all self-interested agents ignore data received about roads with heavy traffic (note that data about roads that are not heavily trafficked is not ignored; see footnote 1 below). A code sketch of this policy follows below. In the next subsection we describe the simulation results, involving the self-interested agents. 4.2 Simulation Results To test the benefits of cheating by the self-interested agents we ran several experiments.
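Reusing the GossipAgent sketch above, the cheating policy of Section 4.1 might look as follows. This is again our illustration: the paper only requires that the reported weight exceed the network's average, so the sampling range and the heavy-traffic acceptance threshold used here are assumptions, as are the function names.

import random

AVG_W, MAX_W = 10.0, 60.0   # assumed network-average and maximum road weights

def self_interested_report(agent, route_roads, traversed_roads, now):
    # Build the gossip message: truth for off-route roads (step 2a),
    # a random high weight for untraversed on-route roads (step 2b).
    # route_roads and traversed_roads are sets of road ids.
    report = dict(agent.beliefs)
    for road in route_roads - traversed_roads:
        report[road] = (now, random.uniform(AVG_W + 1.0, MAX_W))
    return report

def self_interested_accept(agent, incoming):
    # Ignore reports of heavy traffic, so the agent is not fooled by its
    # own lies (or by other liars); accept fresh data about free roads.
    for road, (ts, w) in incoming.items():
        if w <= AVG_W and (road not in agent.beliefs
                           or agent.beliefs[road][0] < ts):
            agent.beliefs[road] = (ts, w)

Note that self_interested_report only forwards what the agent has gathered or received, plus the on-route lies, which is consistent with the behavior described above.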
In the first set of experiments, we created a scenario in which a small group of self-interested agents spread lies about the same route, and tested its effect on the journey length of all the agents in the network.
[Footnote 1: In other simulations we ran, in which there were several real congestions in the network, we indeed saw that even when roads were jammed, the self-interested agents were less affected if they ignored all reported heavy traffic, since by doing so they also discarded all the lies roaming the network.]
Table 1: Normalized journey length values, self-interested agents with the same route
Iteration Number | Self-Interested Agents | Gossip - SR | Gossip - Others | Regular Agents
1 | 1.38 | 1.27 | 1.06 | 1.06
2 | 0.95 | 1.56 | 1.18 | 1.14
3 | 1.00 | 1.86 | 1.28 | 1.17
4 | 1.06 | 2.93 | 1.35 | 1.16
5 | 1.13 | 2.00 | 1.40 | 1.17
6 | 1.08 | 2.02 | 1.43 | 1.18
Thus, several cars, which had the same origin and destination points, were designated as self-interested agents. In this simulation, we selected only 6 agents to be part of the group of self-interested agents, as we wanted to investigate the effect achieved by only a small number of agents. In each simulation in this experiment, 6 different agents were randomly chosen to be part of the group of self-interested agents, as described above. In addition, one road on the route of these agents was randomly selected to be partially blocked, letting only one car go through that road at each time step. About 8,000 agents were randomly selected as regular gossip agents, and the other 32,000 agents were designated as regular agents. We analyzed the average journey length of the self-interested agents as opposed to the average journey length of other regular gossip agents traveling along the same route. Table 1 summarizes the normalized results for the self-interested agents, the gossip agents (those having the same origin and destination points as the self-interested agents, denoted Gossip - SR, and all other gossip agents, denoted Gossip - Others) and the regular agents, as a function of the iteration number. We can see from the results that spreading false data about the roads did not help the self-interested agents the first time they traveled the route (using the paired t-test we show that those agents had significantly lower journey lengths in the scenario in which they did not spread any lies, with p < 0.01). This is mainly due to the fact that the lies cannot bypass the self-interested agent and reach cars that are ahead of it on the same route. Thus, spreading the lies does not help the self-interested agent free the route it is about to travel in the first iteration. Only when the self-interested agents repeated their journey in the next iteration (iteration 2) did it help them significantly (p = 0.04). The reason for this is that other gossip agents received this data and used it to recalculate their shortest paths, thus avoiding entrance to the roads for which the self-interested agents had spread false congestion information. It is also interesting to note the large value attained by the self-interested agents in the first iteration. This is mainly due to several self-interested agents who entered the jammed road. This situation occurred since the self-interested agents ignored all heavy-traffic data, and thus ignored the fact that the road was jammed. As they started spreading lies about this road, more cars shifted away from this route, thus making the road free for the future iterations.
However, we also recall that the self-interested agents ignore all information about heavily trafficked roads. Thus, when the network becomes congested, more self-interested cars are affected, since they might enter jammed roads which they would otherwise not have entered. This can be seen, for example, in iterations 4-6, in which the normalized value of the self-interested agents increased above 1.00. Using the paired t-test to compare these values with the values achieved by these agents when no lies are used, we see that there is no significant difference between the two scenarios. As opposed to the gossip agents, we can see how little effect the self-interested agents have on the regular agents. Compared to the gossip agents on the same route, which traveled as much as 193% longer when self-interested agents were introduced, the average journey length for the regular agents increased by only about 15%. This is even lower than the effect on the other gossip agents in the entire network. Since we noticed that cheating by the self-interested agents does not benefit them in the first iteration, we devised another set of experiments. In this second set of experiments, the self-interested agents have the objective of helping another agent, who enters the network some time after the self-interested agent. We refer to the latter agent as the beneficiary agent. Just like a self-interested agent, the beneficiary agent also ignores all data regarding heavy traffic. In real life this can be modeled, for example, by a husband who would like to help his wife find a faster route to her destination.
Table 2: Normalized journey length values, spreading lies for a beneficiary agent
Iteration Number | Beneficiary Agent | Gossip - SR | Gossip - Others | Regular Agents
1 | 1.10 | 1.05 | 0.94 | 1.11
2 | 1.09 | 1.14 | 0.99 | 1.14
3 | 1.04 | 1.19 | 1.02 | 1.14
4 | 1.03 | 1.26 | 1.03 | 1.14
5 | 1.05 | 1.32 | 1.05 | 1.12
6 | 0.92 | 1.40 | 1.06 | 1.11
Table 2 summarizes the normalized values for the different agents. As in the first set of experiments, 5 simulations were run for each scenario, for a total of 25 simulations. In each of these simulations one agent was randomly selected as a self-interested agent, and then another agent, with the same origin as the self-interested agent, was randomly selected as the beneficiary agent. The other 8,000 and 32,000 agents were designated as regular gossip agents and regular agents, respectively. We can see that as the iterations advance, the normalized value for the beneficiary agent decreases. In this scenario, just as in the previous one, in the first iterations the beneficiary agent not only fails to avoid the jammed roads, since he ignores all heavy traffic, but he also does not benefit from the lies spread by the self-interested agent. This is due to the fact that the lies have not yet been incorporated by other gossip agents. Thus, if we compare the average journey length in the first iteration when lies are spread and when there are no lies, the average is significantly lower when there are no lies (p < 0.03). On the other hand, if we compare the average journey length over all of the iterations, there is no significant difference between the two settings. Still, in most of the iterations, the average journey length of the beneficiary agent is longer than in the case when no lies are spread. We can also see the impact on the other agents in the system.
While the gossip agents that are not on the route of the beneficiary agent are virtually unaffected by the self-interested agent, those on the route, as well as the regular agents, are affected and have higher normalized values. That is, even with just one self-interested car, both the gossip agents that follow the route about which the self-interested agent spread lies, and the regular agents, increase their journey length by more than 14%. In our third set of experiments we examined a setting with an increasing number of self-interested agents, in which the agents did not necessarily have the same origin and destination points. To model this we randomly selected self-interested agents, whose objective was to minimize their average journey length, assuming the cars were repeating their journeys (that is, more than one iteration was made). As opposed to the first set of experiments, in this set the self-interested agents were selected randomly, and we did not enforce the constraint that they all have the same origin and destination points. As in the previous sets of experiments we ran 5 different simulations per scenario. In each simulation 11 runs were made, each run with a different number of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, and 16. Each agent adopted the behavior modeled in Section 4.1. Figure 1 shows the normalized value achieved by the self-interested agents as a function of their number, for iterations 2-6. The first iteration is intentionally not shown, as we assume repeated journeys; we have seen in the previous sets of experiments, and have explained, why the self-interested agents do not gain much from their behavior in the first iteration.
[Figure 1: Self-interested agents' normalized values (y-axis) as a function of the number of self-interested agents (x-axis, 0-16), one curve per iteration 2-6.]
Using these simulations we examined what the threshold could be on the number of randomly selected self-interested agents that still allows them to benefit from their selfish behavior. We can see that with up to 8 self-interested agents, the average normalized value is below 1; that is, they benefit from their malicious behavior. In the case of one self-interested agent there is a significant difference between the average journey length when the agent spreads lies and when no lies are spread (p < 0.001), while with 2, 4, 8 and 16 self-interested agents there is no significant difference. Yet, as the number of self-interested agents increases, the normalized value also increases. In such cases, the normalized value is larger than 1, and the self-interested agents' journey length becomes significantly higher than their journey length in the setting with no self-interested agents in the system. In the next subsection we analyze the scenario as a game and show that in equilibrium the exchange of gossip between the agents becomes inefficient. 4.3 When Gossiping is Inefficient We continued and modeled our scenario as a game, in order to find the equilibrium. There are two possible types of agents: (a) regular gossip agents, and (b) self-interested agents.
Each of these agents is a representative of its group, and thus all agents in the same group behave similarly. We note that the advantage of using gossiping in transportation networks is that it allows the agents to detect anomalies in the network (e.g., traffic jams) and to quickly adapt to them by recalculating their routes [14]. We also assume that the objective of the self-interested agents is to minimize their own journey length, and thus they spread lies about their routes, as described in Section 4.1. We further assume that sophisticated methods for identifying the self-interested agents or managing reputation are not used. This is mainly due to the complexity of incorporating and maintaining such mechanisms, as well as to the dynamics of the network, in which interactions between different agents are frequent, agents may leave the network, and data about a road might change as time progresses (e.g., a road might be reported by a regular gossip agent as free at a given time, yet it may currently be jammed due to heavy traffic). Let Tavg be the average time it takes to traverse an edge in the transportation network (that is, the average load of an edge). Let Tmax be the maximum time it takes to traverse an edge. We investigate the game in which the self-interested and the regular gossip agents can choose the following actions. The self-interested agents can choose how much to lie; that is, they can choose the (not necessarily true) traversal times they report for certain roads. Since the objective of the self-interested agents is to spread messages as though some roads are jammed, the traversal time they report is obviously larger than the average time. We denote the time the self-interested agents spread as Ts, such that Tavg ≤ Ts ≤ Tmax. Motivated by the results of the simulations described above, which showed that agents are less affected if they discard the heavy-traffic values, the regular gossip cars, attempting to mitigate the effect of the liars, can choose a strategy of ignoring abnormal congestion values above a certain threshold, Tg. Obviously, Tavg ≤ Tg ≤ Tmax. In order to prevent the gossip agents from detecting the lies and simply discarding those values, the self-interested agents send lies in a given range, [Ts, Tmax], with an inverse geometric distribution; that is, the higher the T value, the higher its frequency. Now we construct the utility functions for each type of agent, which are defined by the values of Ts and Tg. If the self-interested agents spread traversal times higher than or equal to the regular gossip cars' threshold, they will not benefit from those lies. Thus, the utility value of the self-interested agents in this case is 0. On the other hand, if the self-interested agents spread a traversal time which is lower than the threshold, they will gain a positive utility value. From the regular gossip agents' point of view, if they accept messages from the self-interested agents, then they incorporate the lies in their calculations, and thus they lose utility. On the other hand, if they discard the false values the self-interested agents send, that is, they do not incorporate the lies, they gain utility. Formally, we use us to denote the utility of the self-interested agents and ug to denote the utility of the regular gossip agents. We also denote the strategy profile in the game as {Ts, Tg}.
The utility functions are defined as:

  us(Ts, Tg) = 0 if Ts ≥ Tg, and us(Ts, Tg) = Ts − Tavg + 1 if Ts < Tg.   (1)

  ug(Ts, Tg) = Tg − Tavg if Ts ≥ Tg, and ug(Ts, Tg) = Ts − Tg if Ts < Tg.   (2)

We are interested in finding the Nash equilibrium. We recall from [12] that a Nash equilibrium is a strategy profile where no player has anything to gain by deviating from his strategy, given that the other agent follows his strategy profile. Formally, let (S, u) denote the game, where S is the set of strategy profiles and u is the set of utility functions. When each agent i ∈ {regular gossip, self-interested} chooses a strategy Ti, resulting in a strategy profile T = (Ts, Tg), agent i obtains a utility of ui(T). A strategy profile T∗ ∈ S is a Nash equilibrium if no deviation in strategy by any single agent is profitable, that is, if for all i, ui(T∗) ≥ ui(Ti, T∗−i). That is, (Ts, Tg) is a Nash equilibrium if the self-interested agents have no other value Ts′ such that us(Ts′, Tg) > us(Ts, Tg), and similarly for the gossip agents. We now have the following theorem. Theorem 4.1. (Tavg, Tavg) is the only Nash equilibrium. Proof. First we show that (Tavg, Tavg) is a Nash equilibrium. Assume, by contradiction, that the gossip agents choose another value Tg′ > Tavg. Then ug(Tavg, Tg′) = Tavg − Tg′ < 0, whereas ug(Tavg, Tavg) = 0. Thus, the regular gossip agents have no incentive to deviate from this strategy. The self-interested agents also have no incentive to deviate: again assume, by contradiction, that the self-interested agents choose another value Ts′ > Tavg. Then us(Ts′, Tavg) = 0, while us(Tavg, Tavg) = 0. We now show that the above solution is unique, i.e., that any other tuple (Ts, Tg), such that Tavg < Tg ≤ Tmax and Tavg < Ts ≤ Tmax, is not a Nash equilibrium. We have three cases. In the first case, Tavg < Tg < Ts ≤ Tmax. Thus, us(Ts, Tg) = 0 and ug(Ts, Tg) = Tg − Tavg. Here the regular gossip agents have an incentive to deviate and choose another strategy Tg + 1, since by doing so they increase their own utility: ug(Ts, Tg + 1) = Tg + 1 − Tavg. In the second case, Tavg < Ts < Tg ≤ Tmax. Thus, ug(Ts, Tg) = Ts − Tg < 0. Here too the regular gossip agents have an incentive to deviate and choose another strategy Tg − 1, for which their utility value is higher: ug(Ts, Tg − 1) = Ts − Tg + 1. In the last case, Tavg < Ts = Tg ≤ Tmax. Thus, us(Ts, Tg) = 0. In this case, the self-interested agents have an incentive to deviate and choose another strategy Tg − 1, for which their utility value is higher: us(Tg − 1, Tg) = Tg − 1 − Tavg + 1 = Tg − Tavg > 0.
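As a quick sanity check of Theorem 4.1, the short script below (our addition, assuming integer traversal times on a small range, here Tavg = 10 and Tmax = 20) enumerates the discrete strategy space under utilities (1) and (2) and finds the profiles that survive all unilateral deviations:

T_AVG, T_MAX = 10, 20      # assumed integer traversal-time bounds

def u_s(ts, tg):           # Eq. (1): utility of the self-interested agents
    return 0 if ts >= tg else ts - T_AVG + 1

def u_g(ts, tg):           # Eq. (2): utility of the regular gossip agents
    return tg - T_AVG if ts >= tg else ts - tg

S = range(T_AVG, T_MAX + 1)
nash = [(ts, tg) for ts in S for tg in S
        # Nash: neither type can gain by a unilateral deviation.
        if all(u_s(d, tg) <= u_s(ts, tg) for d in S)
        and all(u_g(ts, d) <= u_g(ts, tg) for d in S)]
print(nash)                # prints [(10, 10)], i.e., (T_AVG, T_AVG)

At (Tavg, Tavg) the gossip agents discard every report at or above the average traversal time, so no anomaly, genuine or fabricated, propagates; this is exactly the inefficiency the theorem formalizes.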
Table 3: Normalized journey length values for the first iteration
Number of Self-Interested Agents | Self-Interested Agents | Gossip Agents | Regular Agents
1 | 0.98 | 1.01 | 1.05
2 | 1.09 | 1.02 | 1.05
4 | 1.07 | 1.02 | 1.05
8 | 1.06 | 1.04 | 1.05
16 | 1.03 | 1.08 | 1.06
32 | 1.07 | 1.17 | 1.08
50 | 1.12 | 1.28 | 1.1
64 | 1.14 | 1.4 | 1.13
80 | 1.15 | 1.5 | 1.14
100 | 1.17 | 1.63 | 1.16
Table 4: Normalized journey length values for all iterations
Number of Self-Interested Agents | Self-Interested Agents | Gossip Agents | Regular Agents
1 | 0.98 | 1.02 | 1.06
2 | 1.0 | 1.04 | 1.07
4 | 1.0 | 1.08 | 1.07
8 | 1.01 | 1.33 | 1.11
16 | 1.02 | 1.89 | 1.17
32 | 1.06 | 2.46 | 1.25
50 | 1.13 | 2.24 | 1.29
64 | 1.21 | 2.2 | 1.32
80 | 1.21 | 2.13 | 1.27
100 | 1.26 | 2.11 | 1.27
The above theorem shows that the equilibrium point is reached only when the self-interested agents report traversal times equal to the average time, while the regular gossip agents discard all data about roads associated with the average time or higher. Thus, at this equilibrium point the exchange of gossip information between agents is inefficient, as the gossip agents are unable to detect any anomalies in the network. In the next section we describe another scenario for the self-interested agents, in which they are not concerned with their own utility, but rather are interested in maximizing the average journey length of the other gossip agents. 5. SPREADING LIES, CAUSING CHAOS Another possible behavior that can be adopted by self-interested agents is characterized by the goal of causing disorder in the network. This can be achieved, for example, by maximizing the average journey length of all agents, even at the cost of maximizing their own journey length. To understand the vulnerability of the gossip-based transportation support system, we ran 5 different simulations for each scenario. In each simulation different agents were randomly chosen (using a uniform distribution) to act as gossip agents, and among them self-interested agents were chosen. Each self-interested agent behaved in the same manner as described in Section 4.1. Every simulation consisted of 11 runs, each run comprising a different number of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, 16, 32, 50, 64, 80 and 100. Also, in each run the set of self-interested agents was increased incrementally. For example, the run with 50 self-interested agents consisted of all the self-interested agents that were used in the run with 32 self-interested agents, with an additional 18 self-interested agents. Tables 3 and 4 summarize the normalized journey lengths for the self-interested agents, the regular gossip agents and the regular (non-gossip) agents. Table 3 summarizes the data for the first iteration and Table 4 summarizes the data averaged over all iterations. Figure 2 shows the changes in the normalized values for the regular gossip agents and the regular agents, as a function of the iteration number. Similar to the results in our first set of experiments, described in Section 4.2, we can see that randomly selected self-interested agents who follow different randomly selected routes do not benefit from their malicious behavior (that is, their average journey length does not decrease). However, when only one self-interested agent is involved, it does benefit from the malicious behavior, even in the first iteration.
The results also indicate that the regular gossip agents are more sensitive to malicious behavior than the regular agents: the average journey length for the gossip agents increases significantly (e.g., with 32 self-interested agents the average journey length for the gossip agents was 146% higher than in the setting with no self-interested agents at all, as opposed to an increase of only 25% for the regular agents). In contrast, these results also indicate that the self-interested agents do not succeed in causing a significant load in the network by their malicious behavior.
[Figure 2: Gossip and regular agents' normalized values as a function of the iteration number (1-6), with curves for 32 and 100 self-interested agents for both the gossip agents and the regular agents.]
Since the goal of the self-interested agents in this case is to cause disorder in the network rather than to use the lies for their own benefit, the question arises as to why the self-interested agents should send lies only about their own routes. Furthermore, we hypothesized that if they all sent lies about the same major roads, the damage they might inflict on the entire network would be larger than if each of them sent lies about its own route. To examine this hypothesis, we designed another set of experiments, in which all the self-interested agents spread lies about the same 13 main roads in the network. However, the results show quite a small impact on the other gossip and regular agents in the network. The average normalized value for the gossip agents in these simulations was only about 1.07, as opposed to 1.7 in the original scenario. When analyzing the results we saw that although the false data was spread, it did not cause other gossip cars to change their routes. The main reason was that the lies were spread about roads that were not on the routes of the self-interested agents. Thus, it took the data longer to reach agents on the main roads, and by the time the agents reached the relevant roads this data was too old to be incorporated in the other agents' calculations. We also examined the impact of sending lies in order to cause chaos when there are already congestions in the network. To this end, we simulated a network in which 13 main roads are jammed. The behavior of the self-interested agents is as described in Section 4.1, and the self-interested agents spread lies about their own routes.
Table 5: Normalized journey length values for all iterations, in a network with congestions
Number of Self-Interested Agents | Self-Interested Agents | Gossip Agents | Regular Agents
1 | 1.04 | 1.02 | 1.22
2 | 1.06 | 1.04 | 1.22
4 | 1.04 | 1.06 | 1.23
8 | 1.07 | 1.15 | 1.26
16 | 1.09 | 1.55 | 1.39
32 | 1.12 | 2.25 | 1.56
50 | 1.24 | 2.25 | 1.60
64 | 1.28 | 2.47 | 1.63
80 | 1.50 | 2.41 | 1.64
100 | 1.69 | 2.61 | 1.75
The simulation results, detailed in Table 5, show that there is a greater incentive for the self-interested agents to cheat when the network is already congested, as their cheating causes more damage to the other agents in the network.
For example, whereas the average journey length of the regular agents increased by only about 15% in the original scenario, in which the network was not congested, in this scenario their average journey length increased by about 60%. 6. CONCLUSIONS In this paper we investigated the benefits achieved by self-interested agents in vehicular networks. Using simulations we investigated two behaviors that might be adopted by self-interested agents: (a) trying to minimize their journey length, and (b) trying to cause chaos in the network. Our simulations indicate that in both behaviors the self-interested agents have only limited success in achieving their goal, even if no counter-measures are taken. This is in contrast to the greater impact inflicted by self-interested agents in other domains (e.g., e-commerce). Some reasons for this are the special characteristics of vehicular networks and their dynamic nature. While the self-interested agents spread lies, they cannot choose the agents with whom they interact. Also, by the time their lies reach other agents, the lies might have become irrelevant, as more recent data has reached the same agents. Motivated by the simulation results, future research in this field will focus on modeling different behaviors of the self-interested agents, which might cause more damage to the network. Another research direction would be to find ways of minimizing the effect of selfish agents by using distributed reputation systems or other measures. 7. REFERENCES [1] A. Bejan and R. Lawrence. Peer-to-peer cooperative driving. In Proceedings of ISCIS, pages 259-264, Orlando, USA, October 2002. [2] I. Chisalita and N. Shahmehri. A novel architecture for supporting vehicular communication. In Proceedings of VTC, pages 1002-1006, Canada, September 2002. [3] S. Das, A. Nandan, and G. Pau. Spawn: A swarming protocol for vehicular ad-hoc wireless networks. In Proceedings of VANET, pages 93-94, 2004. [4] A. Datta, S. Quarteroni, and K. Aberer. Autonomous gossiping: A self-organizing epidemic algorithm for selective information dissemination in mobile ad-hoc networks. In Proceedings of IC-SNW, pages 126-143, Maison des Polytechniciens, Paris, France, June 2004. [5] D. Dolev, R. Reischuk, and H. R. Strong. Early stopping in Byzantine agreement. JACM, 37(4):720-741, 1990. [6] GM. Threat assessment algorithm. http://www.nhtsa.dot.gov/people/injury/research/pub/acas/acas-fieldtest/, 2000. [7] Honda. http://world.honda.com/news/2005/c050902.html. [8] L. Lamport, R. Shostak, and M. Pease. The Byzantine Generals problem. In Advances in Ultra-Dependable Distributed Systems, N. Suri, C. J. Walter, and M. M. Hugue (Eds.). IEEE Computer Society Press, 1982. [9] C. Leckie and R. Kotagiri. Policies for sharing distributed probabilistic beliefs. In Proceedings of ACSC, pages 285-290, Adelaide, Australia, 2003. [10] D. Malkhi, E. Pavlov, and Y. Sella. Gossip with malicious parties. Technical report 2003-9, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel, March 2003. [11] Y. M. Minsky and F. B. Schneider. Tolerating malicious gossip. Distributed Computing, 16(1):49-68, February 2003. [12] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, Cambridge MA, 1994. [13] R. Parshani. Routing in gossip networks. Master's thesis, Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel, October 2004. [14] R. Parshani, S. Kraus, and Y. Shavitt. A study of gossiping in transportation networks. Submitted for publication, 2006. [15] Y.
Shavitt and A. Shay. Optimal routing in gossip networks. IEEE Transactions on Vehicular Technology, 54(4):1473-1487, July 2005. [16] N. Shibata, T. Terauchi, T. Kitani, K. Yasumoto, M. Ito, and T. Higashino. A method for sharing traffic jam information using inter-vehicle communication. In Proceedings of V2VCOM, USA, 2006. [17] W. Wang, X.-Y. Li, and Y. Wang. Truthful multicast routing in selfish wireless networks. In Proceedings of MobiCom, pages 245-259, USA, 2004. [18] B. Yu and M. P. Singh. A social mechanism of reputation management in electronic communities. In Proceedings of CIA, 2000.
On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks ∗ ABSTRACT As more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads. Indeed, car manufacturers are already equipping their cars with such devices. Though these systems are currently proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban areas. Nonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers. These car owners will try to manipulate their agents such that they transmit false data to their peers. Using a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits achieved by self-interested agents if no counter-measures are implemented. 1. INTRODUCTION As technology advances, more and more cars are being equipped with devices, which enable them to act as autonomous agents. An important advancement in this respect is the introduction of ad-hoc communication networks (such as Wi-Fi), which enable the exchange of information ∗ This work was supported in part under ISF grant number 8008. between cars, e.g., for locating road congestions [1] and optimal routes [15] or improving traffic safety [2]. Vehicle-To-Vehicle (V2V) communication is already offered on board by some car manufacturers, enabling collaboration between different cars on the road. For example, GM's proprietary algorithm [6], called the "threat assessment algorithm", constantly calculates, in real time, other vehicles' positions and speeds, and enables messaging other cars when a collision is imminent; also, Honda has begun testing a system in which vehicles talk with each other and with the highway system itself [7]. In this paper, we investigate the attraction of being a selfish agent in vehicular networks. That is, we investigate the benefits achieved by car owners who tamper with on-board devices and incorporate their own self-interested agents in them, which act for their benefit. We build on the notion of Gossip Networks, introduced by Shavitt and Shay [15], in which the agents can obtain road congestion information by gossiping with peer agents using ad-hoc communication. We recognize two typical behaviors that the self-interested agents could embark upon in the context of vehicular networks. In the first behavior, described in Section 4, the objective of the self-interested agents is to maximize their own utility, expressed by their average journey duration on the road. This situation can be modeled in real life by car owners whose aim is to reach their destination as fast as possible, and who would like to have their way free of other cars. To this end they will let their agents cheat the other agents, by injecting false information into the network. This is achieved by reporting heavy traffic values for the roads on their route to other agents in the network, in the hope of making the other agents believe that the route is jammed and causing them to choose a different route. The second type of behavior, described in Section 5, is modeled by the self-interested agents' objective to cause disorder in the network, more than they are interested in maximizing their own utility.
This kind of behavior could be generated, for example, by vandals or terrorists, who aim to cause as much mayhem in the network as possible. We note that the introduction of self-interested agents to the network would most probably motivate other agents to try and detect these agents in order to minimize their effect. This is similar, though in a different context, to the problem introduced by Lamport et al. [8] as the Byzantine Generals Problem. However, the introduction of mechanisms to deal with self-interested agents is costly and time consuming. In this paper we focus mainly on the attractiveness of selfish behavior by these agents, while we also provide some insights into the possibility of detecting self-interested agents and minimizing their effect. To demonstrate the benefits achieved by self-interested agents, we have used a simulation environment, which models the transportation network in a central part of a large real city. The simulation environment is further described in Section 3. Our simulations provide insights into the benefits of cheating by self-interested agents. Our findings can motivate future research in this field in order to minimize the effect of selfish agents. The rest of this paper is organized as follows. In Section 2 we review related work in the field of self-interested agents and V2V communications. We continue and formally describe our environment and simulation settings in Section 3. Sections 4 and 5 describe the different behaviors of the self-interested agents and our findings. Finally, we conclude the paper with open questions and future research directions. 2. RELATED WORK In their seminal paper, Lamport et al. [8] describe the Byzantine Generals problem, in which processors need to handle malfunctioning components that give conflicting information to different parts of the system. They also present a model in which not all agents are connected, and thus an agent cannot send a message to all the other agents. Dolev et al. [5] have built on this problem and have analyzed the number of faulty agents that can be tolerated in order to eventually reach the right conclusion about true data. Similar work is presented by Minsky et al. [11], who discuss techniques for constructing gossip protocols that are resilient to up to t malicious host failures. As opposed to the above works, our work focuses on vehicular networks, in which the agents are constantly roaming the network and exchanging data. Also, the domain of transportation networks introduces dynamic data, as the load of the roads is subject to change. In addition, the system in transportation networks has a feedback mechanism, since the load on the roads depends on the reports and the movement of the agents themselves. Malkhi et al. [10] present a gossip algorithm for propagating information in a network of processors in the presence of malicious parties. Their algorithm prevents the spreading of spurious gossip and diffuses genuine data. This is done in time logarithmic in the number of processes and linear in the number of corrupt parties. Nevertheless, their work assumes that the network is static and also that the agents are static (they discuss a network of processors). This is not true for transportation networks.
For example, in our model, agents might gossip about the heavy traffic load of a specific road, which is currently jammed, yet this information might be false several minutes later, leaving the agents to speculate whether the spreading agents are indeed malicious or not. In addition, as the agents are constantly moving, each agent cannot choose with whom he interacts and exchanges data. In the context of analyzing the data and deciding whether the data is true or not, researchers have focused on distributed reputation systems or decision mechanisms to decide whether or not to share data. Yu and Singh [18] build a social network of agents' reputations. Every agent keeps a list of its neighbors, which can be changed over time, and computes the trustworthiness of other agents by updating the current values of testimonies obtained from reliable referral chains. After a bad experience with another agent, every agent decreases the rating of the 'bad' agent and propagates this bad experience throughout the network so that other agents can update their ratings accordingly. This approach might be implemented in our domain to allow gossip agents to identify self-interested agents and thus minimize their effect. However, the implementation of such a mechanism is an expensive addition to the infrastructure of autonomous agents in transportation networks. This is mainly due to the dynamic nature of the list of neighbors in transportation networks. Thus, not only does it require maintaining the neighbors' list, since the neighbors change frequently, but it is also harder to build a good reputation system. Leckie et al. [9] focus on the issue of when to share information between the agents in the network. Their domain involves monitoring distributed sensors. Each agent monitors a subset of the sensors and evaluates a hypothesis based on the local measurements of its sensors. If the agent believes that a hypothesis is sufficiently likely, he exchanges this information with the other agents. In their domain, the goal of all the agents is to reach a global consensus about the likelihood of the hypothesis. In our domain, however, as the agents constantly move, they have many samples, which they exchange with each other. Also, the data might vary (e.g., a road might be reported as jammed, but a few minutes later it could be free), thus making it harder to decide whether to trust the agent who sent the data. Moreover, the agent might lie only about a subset of its samples, thus making it even harder to detect his cheating. Some work has been done in the context of gossip networks or transportation networks regarding the spreading of data and its dissemination. Datta et al. [4] focus on information dissemination in mobile ad-hoc networks (MANETs). They propose an autonomous gossiping algorithm for an infrastructure-less mobile ad-hoc networking environment. Their autonomous gossiping algorithm uses a greedy mechanism to spread data items in the network. The data items are spread to immediate neighbors that are interested in the information, and avoid ones that are not interested. The decision as to which node is interested in the information is made by the data item itself, using heuristics. However, their work concentrates on the movement of the data itself, and not on the agents who propagate the data.
This is different from our scenario, in which each agent maintains the data it has gathered, while the agent itself roams the roads and is responsible (and has the capabilities) for spreading the data to other agents in the network. Das et al. [3] propose a cooperative strategy for content delivery in vehicular networks. In their domain, peers download a file from a mesh and exchange pieces of the file among themselves. We, on the other hand, are interested in vehicular networks in which there is no rule forcing the agents to cooperate among themselves. Shibata et al. [16] propose a method for cars to cooperatively and autonomously collect traffic jam statistics to estimate arrival time to destinations for each car. The communication is based on IEEE 802.11, without using a fixed infrastructure on the ground. While we use the same domain, we focus on a different problem. Shibata et al. [16] mainly focus on efficiently broadcasting the data between agents (e.g., avoiding duplicates and communication overhead), whereas we focus on the case where agents are not cooperative in nature, and on how selfish agents affect other agents and the network load. Wang et al. [17] also assert, in the context of wireless networks, that individual agents are likely to do what is most beneficial for their owners, and will act selfishly. They design a protocol for communication in networks in which all agents are selfish. Their protocol motivates every agent to maximize its profit only when it behaves truthfully (a mechanism of incentive compatibility). However, the domain of wireless networks is quite different from the domain of transportation networks. In the wireless network, the wireless terminal is required to contribute its local resources to transmit data. Thus, Wang et al. [17] use a payment mechanism, which attaches costs to terminals when transmitting data, and thus enables them to maximize their utility when transmitting data, instead of acting selfishly. Unlike this, in the context of transportation networks, constructing such a mechanism is not quite a straightforward task, as self-interested agents and regular gossip agents might incur the same cost when transmitting data. The difference between the two types of agents exists only in the credibility of the data they exchange. In the next section, we will describe our transportation network model and gossiping between the agents. We will also describe the different agents in our system. 3. MODEL AND SIMULATIONS We first describe the formal transportation network model, and then we describe the simulation designs. 3.1 Formal Model Following Shavitt and Shay [15] and Parshani [13], the transportation network is represented by a directed graph G(V, E), where V is the set of vertices representing junctions, and E is the set of edges, representing roads. An edge e ∈ E is associated with a weight w > 0, which specifies the time it takes to traverse the road associated with that edge. The roads' weights vary in time according to the network (traffic) load. Each car, which is associated with an autonomous agent, is given a pair of origin and destination points (vertices). A journey is defined as the (not necessarily simple) path taken by an agent between the origin vertex and the destination vertex. We assume that there is always a path between a source and a destination. A journey length is defined as the sum of all weights of the edges constituting this path.
Every agent has to travel between its origin and destination points and aims to minimize its journey length. Initially, agents are ignorant about the state of the roads. Regular agents are only capable of gathering information about the roads as they traverse them. However, we assume that some agents have means of inter-vehicle communication (e.g., IEEE 802.11) with a given communication range, which enables them to communicate with other agents carrying the same device. These agents are referred to as gossip agents. Since the communication range is limited, the exchange of information using gossiping is done in one of two ways: (a) between gossip agents passing one another, or (b) between gossip agents located at the same junction. We assume that each agent stores the most recent information it has received or gathered about the edges in the network. A subset of the gossip agents are self-interested and manipulate the devices for their own benefit. We refer to these agents as self-interested agents; a detailed description of their behavior is given in Sections 4 and 5. 3.2 Simulation Design Building on [13], the network in our simulations replicates a central part of a large city, and consists of 50 junctions and 150 roads, approximately the number of main streets in the city. Each simulation consists of 6 iterations. The basic time unit of an iteration is a step, which is equivalent to about 30 seconds. Each iteration simulates six hours of movement. The average number of cars passing through the network during an iteration is about 70,000, and the average number of cars in the network at a specific time unit is about 3,500. In each iteration the same agents are used with the same origin and destination points, and the data collected in earlier iterations is preserved in later iterations (referred to as the history of the agent). This allows us to roughly simulate a daily routine in the transportation network (e.g., a working week). Each of the experiments that we describe below is run with 5 different traffic scenarios. The traffic scenarios differ from one another in the initial load of the roads and in the designated routes of the agents (cars) in the network. For each such scenario 5 simulations are run, creating a total of 25 simulations per experiment. It has been shown by Parshani et al. [13, 14] that information propagation in the network is very efficient when the percentage of gossiping agents is 10% or more. Yet, due to congestion caused by too many cars rushing to what is reported as the less congested part of the network, 20-30% gossiping agents leads to the most efficient routing results in their experiments. Thus, we focus only on simulations in which the percentage of gossip agents is 20%. The simulations were run with different percentages of self-interested agents. To gain statistical significance we ran each simulation with changes in the set of gossip agents and in the set of self-interested agents. To place the results on a common scale, they were normalized: the normalized values were calculated by comparing each agent's result to its result when the same scenario was run with no self-interested agents. This was done for all of the iterations. Using the normalized values enabled us to see how much worse (or better) each agent performs compared to the basic setting.
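A minimal sketch of this normalization (the function and agent names are illustrative assumptions, not the authors' code); the worked example that follows matches its output:

```python
# Each agent's journey length is divided by its journey length in the
# baseline run of the same scenario and iteration with no
# self-interested agents.
def normalized_values(lengths, baseline_lengths):
    """Map agent id -> journey-length ratio relative to the baseline run."""
    return {agent: lengths[agent] / baseline_lengths[agent]
            for agent in lengths}

print(normalized_values({"agent7": 60.0}, {"agent7": 50.0}))  # {'agent7': 1.2}
```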
For example, if the average journey length of a certain agent in iteration 1 with no self-interested agents was 50, and the length was 60 in the same scenario and iteration when self-interested agents were involved, then the normalized value for that agent would be 60/50 = 1.2. More details regarding the simulations are given in Sections 4 and 5. 4. SPREADING LIES, MAXIMIZING UTILITY In the first set of experiments we investigated the benefits achieved by the self-interested agents, whose aim was to minimize their own journey length. The self-interested agents adopted a cheating approach, in which they sent false data to their peers. In this section we first describe the simulations with the self-interested agents. Then, we model the scenario as a game with two types of agents, and prove that the equilibrium result can only be achieved when there is no efficient exchange of gossiping information in the network. 4.1 Modeling the Self-Interested Agents' Behavior While the gossip agents gather data and send it to other agents, the self-interested agents' behavior is modeled as follows:
1. Calculate the shortest path from origin to destination.
2. Communicate the following data to other agents:
(a) If a road is not on the agent's route, send the true data about it (e.g., data about roads it has received from other agents).
(b) For all roads on the agent's route that the agent has not yet traversed, send a random high weight.
Basically, the self-interested agent acts the same as the gossip agent. It collects data regarding the weight of the roads (either by traversing the road or by receiving the data from other agents) and sends the data it has collected to other agents. However, the self-interested agent acts differently when the road is on its route. Since the agent's goal is to reach its destination as fast as possible, the agent falsely reports that all the roads on its route are heavily congested. This is in order to free the path for itself, by making other agents recalculate their paths, this time without including roads on the self-interested agent's route. To this end, for all the roads on its route that the agent has not yet passed, the agent generates a random weight above the average weight of the roads in the network. It then associates these new weights with the roads on its route and sends them to the other agents. While an agent could also divert cars from its route by falsely reporting congested roads parallel to its route as free, this behavior is not very likely, since other agents attempting to use those roads would discover the mistake within a short time and spread the true congestion data for the road. On the other hand, if an agent manages to persuade other agents not to use a road, it is harder for them to detect that the said road is in fact not congested. In addition, to avoid being influenced by their own lies and by other lies spreading in the network, all self-interested agents ignore data received about roads with heavy traffic (note that data about roads that are not heavily congested is not ignored).¹
¹ In other simulations we have run, in which there were several real congestions in the network, we indeed saw that even when the roads were jammed, the self-interested agents were less affected if they ignored all reported heavy traffic, since by doing so they also discarded all the lies roaming the network.
In the next subsection we describe the simulation results involving the self-interested agents.
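The reporting rule above can be summarized in a short sketch; the function names and the sampling range are illustrative assumptions, not the authors' implementation:

```python
import random

def build_report(known_weights, remaining_route, avg_weight, max_weight):
    """Reporting rule of Section 4.1: lie about roads still ahead on the
    agent's own route, tell the truth about everything else."""
    report = {}
    for road, true_weight in known_weights.items():
        if road in remaining_route:
            # Claim the road is heavily congested: a random weight above
            # the network's average traversal time.
            report[road] = random.uniform(avg_weight, max_weight)
        else:
            report[road] = true_weight
    return report

def filter_incoming(report, avg_weight):
    """Self-interested agents discard all heavy-traffic reports, so that
    their own (and others') lies do not influence their routing."""
    return {road: w for road, w in report.items() if w <= avg_weight}
```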
4.2 Simulation Results To test the benefits of cheating by the self-interested agents we ran several experiments. In the first set of experiments, we created a scenario in which a small group of self-interested agents spread lies about the same route, and tested its effect on the journey length of all the agents in the network. Thus, several cars, which had the same origin and destination points, were designated as self-interested agents. In this simulation, we selected only 6 agents to be part of the group of self-interested agents, as we wanted to investigate the effect achieved by only a small number of agents. In each simulation in this experiment, 6 different agents were randomly chosen to be part of the group of self-interested agents, as described above. In addition, one road on the route of these agents was randomly selected to be partially blocked, letting only one car go through that road at each time step. About 8,000 agents were randomly selected as regular gossip agents, and the other 32,000 agents were designated as regular agents. We analyzed the average journey length of the self-interested agents as opposed to the average journey length of other regular gossip agents traveling along the same route.
Table 1: Normalized journey length values, self-interested agents with the same route.
Table 1 summarizes the normalized results for the self-interested agents, the gossip agents (those having the same origin and destination points as the self-interested agents, denoted Gossip - SR, and all other gossip agents, denoted Gossip - Others) and the regular agents, as a function of the iteration number. We can see from the results that the first time the self-interested agents traveled the route while spreading the false data about the roads, the lies did not help them (using the paired t-test we show that those agents had significantly lower journey lengths in the scenario in which they did not spread any lies, with p < 0.01). This is mainly due to the fact that the lies do not bypass the self-interested agent and reach the cars that are ahead of the self-interested car on the same route. Thus, spreading the lies in the first iteration does not help the self-interested agent free the route it is about to travel in that iteration. Only when the self-interested agents repeated their journey in the next iteration (iteration 2) did the lies help them significantly (p = 0.04). The reason for this is that other gossip agents received the false data and used it to recalculate their shortest paths, thus avoiding the roads about which the self-interested agents had spread false congestion information. It is also interesting to note the large value attained by the self-interested agents in the first iteration. This is mainly due to several self-interested agents who entered the jammed road. This situation occurred because the self-interested agents ignored all heavy traffic data, and thus ignored the fact that the road was jammed. As they started spreading lies about this road, more cars shifted away from this route, making the road free in future iterations. However, we also recall that the self-interested agents ignore all information about heavily congested roads. Thus,
when the network becomes congested, more self-interested cars are affected, since they might enter jammed roads which they would otherwise not have entered. This can be seen, for example, in iterations 4-6, in which the normalized value of the self-interested agents increased above 1.00. Using the paired t-test to compare these values with the values achieved by these agents when no lies are used, we see that there is no significant difference between the two scenarios. In contrast to the gossip agents, the self-interested agents have little effect on the regular agents: whereas the gossip agents on the same route traveled as much as 193% longer when self-interested agents were introduced, the average journey length of the regular agents increased by only about 15%. This is even lower than the effect on the other gossip agents in the network. Since we noticed that cheating by the self-interested agents does not benefit them in the first iteration, we devised another set of experiments. In the second set of experiments, the objective of the self-interested agents is to help another agent, who enters the network some time after the self-interested agent. We refer to the latter agent as the beneficiary agent. Just like a self-interested agent, the beneficiary agent also ignores all data regarding heavy traffic. In real life this can be modeled, for example, by a husband who would like to help his wife find a faster route to her destination.
Table 2: Normalized journey length values, spreading lies for a beneficiary agent.
Table 2 summarizes the normalized values for the different agents. As in the first set of experiments, 5 simulations were run for each scenario, with a total of 25 simulations. In each of these simulations one agent was randomly selected as a self-interested agent, and then another agent, with the same origin as the self-interested agent, was randomly selected as the beneficiary agent. The other 8,000 and 32,000 agents were designated as regular gossip agents and regular agents, respectively. We can see that as the iterations advance, the normalized value for the beneficiary agent decreases. In this scenario, just like the previous one, in the first iterations the beneficiary agent does not avoid the jammed roads, since he ignores all heavy traffic data, and he also does not benefit from the lies spread by the self-interested agent. This is due to the fact that the lies are not yet incorporated by other gossip agents. Thus, if we compare the average journey length in the first iteration when lies are spread and when there are no lies, the average is significantly lower when there are no lies (p < 0.03). On the other hand, if we compare the average journey length over all of the iterations, there is no significant difference between the two settings. Still, in most of the iterations, the average journey length of the beneficiary agent is longer than in the case where no lies are spread. We can also see the impact on the other agents in the system. While the gossip agents that are not on the route of the beneficiary agent are virtually unaffected by the self-interested agent, those on the route, as well as the regular agents, are affected and have higher normalized values. That is, even with just one self-interested car, both the gossip agents that follow the route about which the self-interested agent spread its lies and the regular agents increase their journey length by more than 14%.
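The significance levels quoted in this section come from paired t-tests over matched runs. A sketch of such a comparison, assuming per-agent journey lengths from the two settings (the numbers below are made up for illustration):

```python
from scipy import stats

# Journey lengths of the same agents in matched runs: with lies spread,
# and in the baseline run without lies. Values are fabricated examples.
with_lies    = [58.0, 63.5, 71.2, 49.8, 66.1]
without_lies = [50.0, 61.0, 65.4, 47.2, 60.3]

t_stat, p_value = stats.ttest_rel(with_lies, without_lies)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # significant if p is small
```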
In our third set of experiments we examined a setting with an increasing number of self-interested agents, who did not necessarily have the same origin and destination points. To model this we randomly selected self-interested agents, whose objective was to minimize their average journey length, assuming the cars were repeating their journeys (that is, more than one iteration was made). As opposed to the first set of experiments, in this set the self-interested agents were selected randomly, and we did not enforce the constraint that they all have the same origin and destination points. As in the previous sets of experiments we ran 5 different simulations per scenario. In each simulation 11 runs were made, each run with a different number of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, and 16. Each agent adopted the behavior modeled in Section 4.1. Figure 1 shows the normalized value achieved by the self-interested agents as a function of their number, for iterations 2-6. The first iteration is intentionally not shown, as we assume repeated journeys; moreover, we have already seen in the previous sets of experiments, and explained why, the self-interested agents do not gain much from their behavior in the first iteration.
Figure 1: Self-interested agents' normalized values as a function of the number of self-interested agents.
Using these simulations we examined what the threshold could be on the number of randomly selected self-interested agents that still allows them to benefit from their selfish behavior. We can see that with up to 8 self-interested agents, the average normalized value is below 1; that is, they benefit from their malicious behavior. In the case of one self-interested agent there is a significant difference between the average journey length when the agent spreads lies and when no lies are spread (p < 0.001), while with 2, 4, 8 and 16 self-interested agents there is no significant difference. Yet, as the number of self-interested agents increases, the normalized value also increases. In such cases, the normalized value is larger than 1, and the self-interested agents' journey length becomes significantly higher than their journey length in the setting with no self-interested agents in the system. In the next subsection we analyze the scenario as a game and show that in equilibrium the exchange of gossip information between the agents becomes inefficient. 4.3 When Gossiping is Inefficient We continue by modeling our scenario as a game, in order to find its equilibrium. There are two possible types of agents: (a) regular gossip agents, and (b) self-interested agents. Each of these agents is a representative of its group, and thus all agents in the same group have similar behavior. We note that the advantage of using gossiping in transportation networks is that it allows the agents to detect anomalies in the network (e.g., traffic jams) and to quickly adapt to them by recalculating their routes [14].
We also assume that the objective of the self-interested agents is to minimize their own journey length, and thus they spread lies about their routes, as described in Section 4.1. We further assume that sophisticated methods for identifying the self-interested agents or managing reputation are not used. This is mainly due to the complexity of incorporating and maintaining such mechanisms, as well as to the dynamics of the network, in which interactions between different agents are frequent, agents may leave the network, and data about a road might change as time progresses (e.g., a road might be reported by a regular gossip agent as free at a given time, yet it may currently be jammed due to heavy traffic on the road). Let T_avg be the average time it takes to traverse an edge in the transportation network (that is, the average load of an edge), and let T_max be the maximum time it takes to traverse an edge. We investigate the game in which the self-interested and the regular gossip agents can choose the following actions. The self-interested agents can choose how much to lie, that is, how long they report (not necessarily the true duration) it takes to traverse certain roads. Since the objective of the self-interested agents is to spread messages as though some roads are jammed, the traversal time they report is obviously larger than the average time. We denote the time the self-interested agents spread as T_s, such that T_avg < T_s < T_max. Motivated by the simulation results described above, in which the agents were less affected if they discarded the heavy traffic values, the regular gossip cars, attempting to mitigate the effect of the liars, can choose a strategy of ignoring abnormal congestion values above a certain threshold, T_g. Obviously, T_avg < T_g < T_max. In order to prevent the gossip agents from detecting the lies and simply discarding those values, the self-interested agents send lies in a given range, [T_s, T_max], with an inverse geometric distribution, that is, the higher the T value, the higher its frequency. Now we construct the utility functions for each type of agent, defined by the values of T_s and T_g. If the self-interested agents spread traversal times higher than or equal to the regular gossip cars' threshold, they will not benefit from those lies; thus, the utility value of the self-interested agents in this case is 0. On the other hand, if the self-interested agents spread a traversal time which is lower than the threshold, they will gain a positive utility value. From the regular gossip agents' point of view, if they accept messages from the self-interested agents, then they incorporate the lies in their calculations, and they lose utility points. On the other hand, if they discard the false values the self-interested agents send, that is, they do not incorporate the lies, they gain utility values. Formally, we use u_s to denote the utility of the self-interested agents and u_g to denote the utility of the regular gossip agents, and we denote the strategy profile in the game as (T_s, T_g). The utility functions are defined as: u_s(T_s, T_g) = T_s − T_avg + 1 if T_s < T_g, and 0 if T_s ≥ T_g; u_g(T_s, T_g) = T_s − T_g if T_s < T_g, and T_g − T_avg if T_s ≥ T_g. We are interested in finding the Nash equilibrium. We recall from [12] that a Nash equilibrium is a strategy profile in which no player has anything to gain by deviating from his strategy, given that the other agent follows his strategy profile. Formally, let (S, u) denote the game, where S is the set of strategy profiles and u is the set of utility functions. When each agent i ∈ {regular gossip, self-interested} chooses a strategy T_i, resulting in a strategy profile T = (T_s, T_g), agent i obtains a utility of u_i(T). A strategy profile T* ∈ S is a Nash equilibrium if no deviation in strategy by any single agent is profitable, that is, if for all i, u_i(T*) ≥ u_i(T_i, T*_-i). That is, (T_s, T_g) is a Nash equilibrium if the self-interested agents have no other value T_s' such that u_s(T_s', T_g) > u_s(T_s, T_g), and similarly for the gossip agents.
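Before stating the theorem, a small sketch of these utility functions, with a brute-force search for pure Nash equilibria over an integer strategy grid, previews the result; T_avg = 10 and T_max = 20 are illustrative assumptions.

```python
# Utility functions of the gossip game, and a brute-force equilibrium
# search over integer strategies in [T_avg, T_max].
T_AVG, T_MAX = 10, 20

def u_s(t_s, t_g):
    # Self-interested agents profit only if their lie passes the filter.
    return t_s - T_AVG + 1 if t_s < t_g else 0

def u_g(t_s, t_g):
    # Gossip agents lose by absorbing lies, gain by filtering them out.
    return t_s - t_g if t_s < t_g else t_g - T_AVG

strategies = range(T_AVG, T_MAX + 1)
equilibria = [
    (t_s, t_g)
    for t_s in strategies for t_g in strategies
    if all(u_s(d, t_g) <= u_s(t_s, t_g) for d in strategies)
    and all(u_g(t_s, d) <= u_g(t_s, t_g) for d in strategies)
]
print(equilibria)  # [(10, 10)] -- only (T_avg, T_avg), matching the theorem
```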
We now have the following theorem. THEOREM 4.1. (T_avg, T_avg) is the only Nash equilibrium. Proof. First we show that (T_avg, T_avg) is a Nash equilibrium. Assume, by contradiction, that the gossip agents choose another value T_g' > T_avg. Then u_g(T_avg, T_g') = T_avg − T_g' < 0, while u_g(T_avg, T_avg) = 0. Thus, the regular gossip agents have no incentive to deviate from this strategy. The self-interested agents also have no incentive to deviate from this strategy: by contradiction, again assume that the self-interested agents choose another value T_s' > T_avg. Then u_s(T_s', T_avg) = 0, while u_s(T_avg, T_avg) = 0. We now show that the above solution is unique, i.e., that any other tuple (T_s, T_g), with T_avg < T_g < T_max and T_avg < T_s < T_max, is not a Nash equilibrium. We have three cases. In the first, T_avg < T_g < T_s < T_max. Then u_s(T_s, T_g) = 0 and u_g(T_s, T_g) = T_g − T_avg. In this case, the regular gossip agents have an incentive to deviate and choose the strategy T_g + 1, since by doing so they increase their own utility: u_g(T_s, T_g + 1) = T_g + 1 − T_avg. In the second case we have T_avg < T_s < T_g < T_max. Then u_g(T_s, T_g) = T_s − T_g < 0, and the regular gossip agents have an incentive to deviate and choose the strategy T_g − 1, for which their utility value is higher: u_g(T_s, T_g − 1) = T_s − T_g + 1. In the last case we have T_avg < T_s = T_g < T_max. Then u_s(T_s, T_g) = 0, and the self-interested agents have an incentive to deviate and choose the strategy T_g − 1, for which their utility value is higher: u_s(T_g − 1, T_g) = T_g − 1 − T_avg + 1 = T_g − T_avg > 0. ∎
The above theorem proves that the equilibrium point is reached only when the self-interested agents report the time to traverse certain edges as equal to the average time, while the regular gossip agents discard all data regarding roads associated with an average time or higher. Thus, at this equilibrium point the exchange of gossiping information between agents is inefficient, as the gossip agents are unable to detect any anomalies in the network. In the next section we describe another scenario for the self-interested agents, in which they are not concerned with their own utility, but are rather interested in maximizing the average journey length of the other gossip agents. 5. SPREADING LIES, CAUSING CHAOS Another possible behavior that can be adopted by self-interested agents is characterized by the goal of causing disorder in the network. This can be achieved, for example, by maximizing the average journey length of all agents, even at the cost of maximizing their own journey length. To understand the vulnerability of the gossip-based transportation support system, we ran 5 different simulations for each scenario. In each simulation different agents were randomly chosen (using a uniform distribution) to act as gossip agents, and from among them self-interested agents were chosen.
Each self-interested agent behaved in the same manner as described in Section 4.1. Every simulation consisted of 11 runs, each run comprising a different number of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, 16, 32, 50, 64, 80 and 100. Also, from run to run the number of self-interested agents was increased incrementally; for example, the run with 50 self-interested agents consisted of all the self-interested agents that were used in the run with 32 self-interested agents, with an additional 18 self-interested agents.
Table 3: Normalized journey length values for the first iteration.
Table 4: Normalized journey length values for all iterations.
Tables 3 and 4 summarize the normalized journey length for the self-interested agents, the regular gossip agents and the regular (non-gossip) agents. Table 3 summarizes the data for the first iteration and Table 4 summarizes the data for the average of all iterations. Figure 2 shows the changes in the normalized values for the regular gossip agents and the regular agents, as a function of the iteration number. Similar to the results of our first set of experiments, described in Section 4.2, we can see that randomly selected self-interested agents who follow different randomly selected routes do not benefit from their malicious behavior (that is, their average journey length does not decrease). However, when only one self-interested agent is involved, it does benefit from the malicious behavior, even in the first iteration. The results also indicate that the regular gossip agents are more sensitive to malicious behavior than the regular agents: the average journey length for the gossip agents increases significantly (e.g., with 32 self-interested agents the average journey length for the gossip agents was 146% higher than in the setting with no self-interested agents at all, as opposed to an increase of only 25% for the regular agents). In contrast, these results also indicate that the self-interested agents do not succeed in causing a significant load in the network by their malicious behavior.
Figure 2: Gossip and regular agents' normalized values, as a function of the iteration.
Since the goal of the self-interested agents in this case is to cause disorder in the network rather than use the lies for their own benefit, the question arises as to why the self-interested agents would send lies only about their own routes. Furthermore, we hypothesized that if they all sent lies about the same major roads, the damage they might inflict on the entire network would be larger than if each of them sent lies about its own route. To examine this hypothesis, we designed another set of experiments, in which all the self-interested agents spread lies about the same 13 main roads in the network. However, the results show a much smaller impact on the other gossip and regular agents in the network: the average normalized value for the gossip agents in these simulations was only about 1.07, as opposed to 1.7 in the original scenario. When analyzing the results we saw that although the false data was spread, it did not cause other gossip cars to change their routes. The main reason was that the lies were spread about roads that were not on the routes of the self-interested agents. Thus, it took the data longer to reach agents on the main roads, and by the time those agents reached the relevant roads this data was "too old" to be incorporated in their calculations.
We also examined the impact of sending lies in order to cause chaos when there is already congestion in the network. To this end, we simulated a network in which 13 main roads are jammed. The behavior of the self-interested agents is as described in Section 4.1, and the self-interested agents spread lies about their own routes.
Table 5: Normalized journey length values for all iterations, in a network with congestions.
The simulation results, detailed in Table 5, show that there is a greater incentive for the self-interested agents to cheat when the network is already congested, as their cheating causes more damage to the other agents in the network. For example, whereas the average journey length of the regular agents increased by only about 15% in the original scenario, in which the network was not congested, in this scenario the average journey length of the agents increased by about 60%. 6. CONCLUSIONS In this paper we investigated the benefits achieved by self-interested agents in vehicular networks. Using simulations we investigated two behaviors that might be adopted by self-interested agents: (a) trying to minimize their journey length, and (b) trying to cause chaos in the network. Our simulations indicate that with both behaviors the self-interested agents have only limited success in achieving their goal, even if no counter-measures are taken. This is in contrast to the greater impact inflicted by self-interested agents in other domains (e.g., E-Commerce). Some reasons for this are the special characteristics of vehicular networks and their dynamic nature. While the self-interested agents spread lies, they cannot choose the agents with whom they interact. Also, by the time their lies reach other agents, the lies might have become irrelevant, as more recent data has reached the same agents. Motivated by the simulation results, future research in this field will focus on modeling different behaviors of the self-interested agents, which might cause more damage to the network. Another research direction would be to find ways of minimizing the effect of selfish agents by using distributed reputation or other measures.
J-33
Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions
We investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. We analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory. Network flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms. Experimental trials help distinguish tractable problem classes for proposed solution techniques.
[ "bid", "auction", "multiattribut auction", "constraint", "semant framework", "multiattribut util theori", "global alloc", "prefer", "on-side mechan", "seller valuat function", "partial specif", "combinatori auction", "continu doubl auction" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U", "M", "M" ]
Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions Yagil Engel, Michael P. Wellman, and Kevin M. Lochner University of Michigan, Computer Science & Engineering 2260 Hayward St, Ann Arbor, MI 48109-2121, USA {yagil,wellman,klochner}@umich.edu ABSTRACT We investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. We analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory. Network flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms. Experimental trials help distinguish tractable problem classes for proposed solution techniques. Categories and Subject Descriptors: F.2 [Theory of Computation]: Analysis Of Algorithms And Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms: Algorithms, Economics 1. BACKGROUND A multiattribute auction is a market-based mechanism where goods are described by vectors of features, or attributes [3, 5, 8, 19]. Such mechanisms provide traders with the ability to negotiate over a multidimensional space of potential deals, delaying commitment to specific configurations until the most promising candidates are identified. For example, in a multiattribute auction for computers, the good may be defined by attributes such as processor speed, memory, and hard disk capacity. Agents have varying preferences (or costs) associated with the possible configurations. For example, a buyer may be willing to purchase a computer with a 2 GHz processor, 500 MB of memory, and a 50 GB hard disk for a price no greater than $500, or the same computer with 1 GB of memory for a price no greater than $600. Existing research in multiattribute auctions has focused primarily on one-sided mechanisms, which automate the process whereby a single agent negotiates with multiple potential trading partners [8, 7, 19, 5, 23, 22]. Models of procurement typically assume the buyer has a value function, v, ranging over the possible configurations, X, and that each seller i can similarly be associated with a cost function c_i over this domain. The role of the auction is to elicit these functions (possibly approximate or partial versions), and identify the surplus-maximizing deal. In this case, such an outcome would be arg max_{i,x} v(x) − c_i(x). This problem can be translated into the more familiar auction for a single good without attributes by computing a score for each attribute vector based on the seller valuation function, and having buyers bid scores. Analogs of the classic first- and second-price auctions correspond to first- and second-score auctions [8, 7]. In the absence of a published buyer scoring function, agents on both sides may provide partial specifications of the deals they are willing to engage in. Research on such auctions has, for example, produced iterative mechanisms for eliciting cost functions incrementally [19].
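For a concrete (toy) instance of the surplus-maximizing outcome arg max_{i,x} v(x) − c_i(x), the following sketch enumerates a small configuration space; the attribute domains, value function, and cost functions are hypothetical, chosen only to illustrate the optimization.

```python
from itertools import product

# Hypothetical attribute domains: processor speed (GHz) and memory (GB).
configs = list(product([1.0, 2.0], [0.5, 1.0]))

def v(x):
    """Buyer's value for configuration x = (cpu, mem)."""
    cpu, mem = x
    return 300 * cpu + 200 * mem

costs = {  # per-seller cost functions c_i, also hypothetical
    "seller1": lambda x: 250 * x[0] + 150 * x[1],
    "seller2": lambda x: 280 * x[0] + 100 * x[1],
}

seller, x, surplus = max(
    ((i, x, v(x) - c(x)) for i, c in costs.items() for x in configs),
    key=lambda triple: triple[2])
print(seller, x, surplus)  # seller1 (2.0, 1.0) 150.0
```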
Other efforts focus on the optimization problem facing the bid taker, for example considering side constraints on the combination of trades comprising an overall deal [4]. Side constraints have also been analyzed in the context of combinatorial auctions [6, 20].

Our emphasis is on two-sided multiattribute auctions, where multiple buyers and sellers submit bids, and the objective is to construct a set of deals maximizing overall surplus. Previous research on such auctions includes works by Fink et al. [12] and Gong [14], both of which consider a matching problem for continuous double auctions (CDAs), where deals are struck whenever a pair of compatible bids is identified. In a call market, in contrast, bids accumulate until designated times (e.g., on a periodic or scheduled basis) at which the auction clears by determining a comprehensive match over the entire set of bids. Because the optimization is performed over an aggregated scope, call markets often enjoy liquidity and efficiency advantages over CDAs [10]. (In the interim between clears, call markets may also disseminate price quotes providing summary information about the state of the auction [24]. Such price quotes are often computed based on hypothetical clears, and so the clearing algorithm may be invoked more frequently than actual market clearing operations.)

Clearing a multiattribute CDA is much like clearing a one-sided multiattribute auction. Because nothing happens between bids, the problem is to match a given new bid (say, an offer to buy) with the existing bids on the other (sell) side. Multiattribute call markets are potentially much more complex. Constructing an optimal overall matching may require consideration of many different combinations of trades, among the various potential trading-partner pairings. The problem can be complicated by restrictions on overall assignments, as expressed in side constraints [16].

The goal of the present work is to develop a general framework for multiattribute call markets, to enable investigation of design issues and possibilities. In particular, we use the framework to explore tradeoffs between expressive power of agent bids and computational properties of auction clearing. We conduct our exploration independent of any consideration of strategic issues bearing on mechanism design. As with analogous studies of combinatorial auctions [18], we intend that tradeoffs quantified in this work can be combined with incentive factors within a comprehensive overall approach to multiattribute auction design.

We provide the formal semantics of multiattribute offers in our framework in the next section. We abstract, where appropriate, from the specific language used to express offers, characterizing expressiveness semantically in terms of what deals may be offered. This enables us to identify some general conditions under which the problem of multilateral matching can be decomposed into bilateral matching problems. We then develop a family of network flow problems that capture corresponding classes of multiattribute call market optimizations. Experimental trials provide preliminary confirmation that the network formulations provide useful structure for implementing clearing algorithms.

2. MULTIATTRIBUTE OFFERS

2.1 Basic Definitions
The distinguishing feature of a multiattribute auction is that the goods are defined by vectors of attributes, x = (x_1, ..., x_m), x_j ∈ X_j. A configuration is a particular attribute vector, x ∈ X = ∏_{j=1}^{m} X_j. The outcome of the auction is a set of bilateral trades.
Trade t takes the form t = (x, q, b, s, π), signifying that agent b buys q > 0 units of configuration x from seller s, for payment π > 0. For convenience, we use the notation x^t to denote the configuration associated with trade t (and similarly for other elements of t). For a set of trades T, we denote by T_i that subset of T involving agent i (i.e., b = i or s = i). Let 𝒯 denote the set of all possible trades.

A bid expresses an agent's willingness to participate in trades. We specify the semantics of a bid in terms of offer sets. Let O^T_i ⊆ 𝒯_i denote agent i's trade offer set. Intuitively, this represents the trades in which i is willing to participate. However, since the outcome of the auction is a set of trades, several of which may involve agent i, we must in general consider willingness to engage in trade combinations. Accordingly, we introduce the combination offer set of agent i, O^C_i ⊆ 2^{𝒯_i}.

2.2 Specifying Offer Sets
A fully expressive bid language would allow specification of arbitrary combination offer sets. We instead consider a more limited class which, while restrictive, still captures most forms of multiattribute bidding proposed in the literature. Our bids directly specify part of the agent's trade offer set, and include further directives controlling how this can be extended to the full trade and combination offer sets. For example, one way to specify a trade (buy) offer set would be to describe a set of configurations and quantities, along with the maximal payment one would exchange for each (x, q) specified. This description could be by enumeration, or any available means of defining such a mapping.

An explicit set of trades in the offer set generally entails inclusion of many more implicit trades. We assume payment monotonicity, which means that agents always prefer more money. That is, for π > π′ > 0,

(x, q, i, s, π) ∈ O^T_i ⇒ (x, q, i, s, π′) ∈ O^T_i,
(x, q, b, i, π′) ∈ O^T_i ⇒ (x, q, b, i, π) ∈ O^T_i.

We also assume free disposal, which dictates that for all i, q > q′ > 0,

(x, q′, i, s, π) ∈ O^T_i ⇒ (x, q, i, s, π) ∈ O^T_i,
(x, q, b, i, π) ∈ O^T_i ⇒ (x, q′, b, i, π) ∈ O^T_i.

Note that the conditions for agents in the role of buyers and sellers are analogous. Henceforth, for expository simplicity, we present all definitions with respect to buyers only, leaving the definition for sellers as understood. Allowing agents' bids to comprise offers from both buyer and seller perspectives is also straightforward. An assertion that offers are divisible entails further implicit members in the trade offer set.

DEFINITION 1 (DIVISIBLE OFFER). Agent i's offer is divisible down to q̲ iff for all q′ with q̲ < q′ < q,

(x, q, i, s, π) ∈ O^T_i ⇒ (x, q′, i, s, (q′/q)π) ∈ O^T_i.

We employ the shorthand divisible to mean divisible down to 0. The definition above specifies arbitrary divisibility. It would likewise be possible to define divisibility with respect to integers, or to any given finite granularity. Note that when offers are divisible, it suffices to specify one offer corresponding to the maximal quantity one is willing to trade for any given configuration, trading partner, and per-unit payment (called the price). At the extreme of indivisibility are all-or-none offers.

DEFINITION 2 (AON OFFER). Agent i's offer is all-or-none (AON) iff

(x, q, i, s, π) ∈ O^T_i ∧ (x, q′, i, s, π′) ∈ O^T_i ⇒ [q = q′ ∨ π = π′].
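To make these closure rules concrete, the sketch below represents trades as in the definitions above and materializes the trades implied by divisibility (Definition 1) at integer granularity. This is a minimal illustration in Python; the class and function names are our own and not part of any established implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trade:
    """A trade t = (x, q, b, s, pi); '*' denotes an anonymous partner."""
    x: tuple    # configuration: vector of attribute values
    q: int      # quantity, q > 0
    b: str      # buyer
    s: str      # seller
    pi: float   # payment, pi > 0

def implied_by_divisibility(t: Trade, q_floor: int = 0) -> list:
    """Trades implied when t's offer is divisible down to q_floor
    (Definition 1), restricted to integers: every quantity strictly
    between q_floor and t.q, with payment scaled by q'/q."""
    return [Trade(t.x, qp, t.b, t.s, t.pi * qp / t.q)
            for qp in range(q_floor + 1, t.q)]
```

Closure under payment monotonicity and free disposal is unbounded (any higher payment is acceptable to a seller, any larger quantity to a buyer), so in practice one would test membership lazily rather than enumerate those implicit trades.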
In many cases, the agent will be indifferent with respect to different trading partners. In that event, it may omit the partner element from trades directly specified in its offer set, and simply assert that its offer is anonymous.

DEFINITION 3 (ANONYMITY). Agent i's offer is anonymous iff ∀s, s′, b, b′:

(x, q, i, s, π) ∈ O^T_i ⇔ (x, q, i, s′, π) ∈ O^T_i, and
(x, q, b, i, π) ∈ O^T_i ⇔ (x, q, b′, i, π) ∈ O^T_i.

Because omitting trading partner qualifications simplifies the exposition, we generally assume in the following that all offers are anonymous unless explicitly specified otherwise. Extending to the non-anonymous case is conceptually straightforward. We employ the wild-card symbol ∗ in place of an agent identifier to indicate that any agent is acceptable.

To specify a trade offer set, a bidder directly specifies a set of willing trades, along with any regularity conditions (e.g., divisibility, anonymity) that implicitly extend the set. The full trade offer set is then defined by the closure of this direct set with respect to payment monotonicity, free disposal, and any applicable divisibility assumptions.

We next consider the specification of combination offer sets. Without loss of generality, we restrict each trade set T ∈ O^C_i to include at most one trade for any combination of configuration and trading partner (multiple such trades are equivalent to one net trade aggregating the quantities and payments). The key question is to what extent the agent is willing to aggregate deals across configurations or trading partners. One possibility is disallowing any aggregation.

DEFINITION 4 (NO AGGREGATION). The no-aggregation combinations are given by O^NA_i = {∅} ∪ {{t} | t ∈ O^T_i}. Agent i's offer exhibits non-aggregation iff O^C_i = O^NA_i.

We require in general that O^C_i ⊇ O^NA_i. A more flexible policy is to allow aggregation across trading partners, keeping configuration constant.

DEFINITION 5 (PARTNER AGGREGATION). Suppose a particular trade is offered in the same context (set of additional trades, T) with two different sellers, s and s′. That is,

{(x, q, i, s, π)} ∪ T ∈ O^C_i ∧ {(x, q, i, s′, π)} ∪ T ∈ O^C_i.

Agent i's offer allows seller aggregation iff in all such cases,

{(x, q′, i, s, π′), (x, q − q′, i, s′, π − π′)} ∪ T ∈ O^C_i.

In other words, we may create new trade offer combinations by splitting the common trade (quantity and payment, not necessarily proportionately) between the two sellers. In some cases, it might be reasonable to form combinations by aggregating different configurations.

DEFINITION 6 (CONFIGURATION AGGREGATION). Suppose agent i offers, in the same context, the same quantity of two (not necessarily different) configurations, x and x′. That is,

{(x, q, i, ∗, π)} ∪ T ∈ O^C_i ∧ {(x′, q, i, ∗, π′)} ∪ T ∈ O^C_i.

Agent i's offer allows configuration aggregation iff in all such cases (and analogously when it is a seller),

{(x, q′, i, ∗, (q′/q)π), (x′, q − q′, i, ∗, ((q − q′)/q)π′)} ∪ T ∈ O^C_i.

Note that combination offer sets can accommodate offerings of configuration bundles. However, classes of bundles formed by partner or configuration aggregation are highly regular, covering only a specific type of bundle formed by splitting a desired quantity across configurations. This is quite restrictive compared to the general combinatorial case.
2.3 Willingness to Pay
An agent's offer trade set implicitly defines the agent's willingness to pay for any given configuration and quantity. We assume anonymity to avoid conditioning our definitions on trading partner.

DEFINITION 7 (WILLINGNESS TO PAY). Agent i's willingness to pay for quantity q of configuration x is given by

û^B_i(x, q) = max π s.t. (x, q, i, ∗, π) ∈ O^T_i.

We use the symbol û to recognize that willingness to pay can be viewed as a proxy for the agent's utility function, measured in monetary units. The superscript B distinguishes the buyer's willingness-to-pay function from a seller's willingness to accept, û^S_i(x, q), defined as the minimum payment seller i will accept for q units of configuration x. We omit the superscript where the distinction is inessential or clear from context.

DEFINITION 8 (TRADE QUANTITY BOUNDS). Agent i's minimum trade quantity for configuration x is given by

q̲_i(x) = min q s.t. ∃π. (x, q, i, ∗, π) ∈ O^T_i.

The agent's maximum trade quantity for x is

q̄_i(x) = max q s.t. ∃π. (x, q, i, ∗, π) ∈ O^T_i ∧ ¬∃q′ < q. (x, q′, i, ∗, π) ∈ O^T_i.

When the agent has no offers involving x, we take q̲_i(x) = q̄_i(x) = 0. It is useful to define a special case where all configurations are offered in the same quantity range.

DEFINITION 9 (CONFIGURATION PARITY). Agent i's offers exhibit configuration parity iff

q̲_i(x) > 0 ∧ q̲_i(x′) > 0 ⇒ q̲_i(x) = q̲_i(x′) ∧ q̄_i(x) = q̄_i(x′).

Under configuration parity we drop the arguments from trade quantity bounds, yielding the constants q̄ and q̲ which apply to all offers.

DEFINITION 10 (LINEAR PRICING). Agent i's offers exhibit linear pricing iff for all q̲_i(x) ≤ q ≤ q̄_i(x),

û_i(x, q) = (q / q̄_i(x)) û_i(x, q̄_i(x)).

Note that linear pricing assumes divisibility down to q̲_i(x). Given linear pricing, we can define the unit willingness to pay, û_i(x) = û_i(x, q̄_i(x)) / q̄_i(x), and take û_i(x, q) = q û_i(x) for all q̲_i(x) ≤ q ≤ q̄_i(x).

In general, an agent's willingness to pay may depend on a context of other trades the agent is engaging in.

DEFINITION 11 (WILLINGNESS TO PAY IN CONTEXT). Agent i's willingness to pay for quantity q of configuration x in the context of other trades T is given by

û^B_i(x, q; T) = max π s.t. {(x, q, i, s, π)} ∪ T_i ∈ O^C_i.

LEMMA 1. If O^C_i is either non-aggregating, or exhibits linear pricing, then û^B_i(x, q; T) = û^B_i(x, q).

3. MULTIATTRIBUTE ALLOCATION

DEFINITION 12 (TRADE SURPLUS). The surplus of trade t = (x, q, b, s, π) is given by

σ(t) = û^B_b(x, q) − û^S_s(x, q).

Note that the trade surplus does not depend on the payment, which is simply a transfer from buyer to seller.

DEFINITION 13 (TRADE UNIT SURPLUS). The unit surplus of trade t = (x, q, b, s, π) is given by σ¹(t) = σ(t)/q. Under linear pricing, we can equivalently write σ¹(t) = û^B_b(x) − û^S_s(x).

DEFINITION 14 (SURPLUS OF A TRADE IN CONTEXT). The surplus of trade t = (x, q, b, s, π) in the context of other trades T, σ(t; T), is given by û^B_b(x, q; T) − û^S_s(x, q; T).

DEFINITION 15 (GMAP). The Global Multiattribute Allocation Problem (GMAP) is to find the set of acceptable trades maximizing total surplus,

max_{T ∈ 2^𝒯} Σ_{t∈T} σ(t; T \ {t}) s.t. ∀i. T_i ∈ O^C_i.

DEFINITION 16 (MMP). The Multiattribute Matching Problem (MMP) is to find a best trade for a given pair of traders,

MMP(b, s) = arg max_{t ∈ O^T_b ∩ O^T_s} σ(t).

If O^T_b ∩ O^T_s is empty, we say that MMP has no solution.
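As a concrete reading of Definitions 12 and 16, the following sketch computes a surplus-maximizing bilateral match by brute force. It assumes, as our own simplification justified by payment monotonicity, that the traders' common candidates are given as (x, q) pairs together with the buyer's willingness-to-pay and the seller's willingness-to-accept functions:

```python
def mmp(candidates, u_b, u_s):
    """Brute-force MMP (Definition 16). candidates: iterable of (x, q)
    pairs appearing in both traders' offer sets; u_b(x, q) is the buyer's
    willingness to pay, u_s(x, q) the seller's willingness to accept.
    Returns (x, q, surplus) for a surplus-maximizing trade, or None."""
    best = max(candidates, key=lambda xq: u_b(*xq) - u_s(*xq), default=None)
    if best is None or u_b(*best) - u_s(*best) < 0:
        return None          # no mutually acceptable payment exists
    x, q = best
    return x, q, u_b(x, q) - u_s(x, q)
```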
Proofs of all the following results are provided in an extended version of this paper available from the authors.

THEOREM 2. Suppose all agents' offers exhibit no aggregation (Definition 4). Then the solution to GMAP consists of a set of trades, each of which is a solution to MMP for its specified pair of traders.

THEOREM 3. Suppose that each agent's offer set satisfies one of the following (not necessarily the same) sets of conditions.
1. No aggregation and configuration parity (Definitions 4 and 9).
2. Divisibility, linear pricing, and configuration parity (Definitions 1, 10, and 9), with combination offer set defined as the minimal set consistent with configuration aggregation (Definition 6). (That is, for such an agent i, O^C_i is the closure under configuration aggregation of O^NA_i.)

Then the solution to GMAP consists of a set of trades, each of which employs a configuration that solves MMP for its specified pair of traders.

Let MMP^d(b, s) denote a modified version of MMP, where O^T_b and O^T_s are extended to assume divisibility (i.e., the offer sets are taken to be their closures under Definition 1). Then we can extend Theorem 3 to allow aggregating agents to maintain AON or min-quantity offers as follows.

THEOREM 4. Suppose offer sets as in Theorem 3, except that agents i satisfying configuration aggregation need be divisible only down to q̲_i, rather than down to 0. Then the solution to GMAP consists of a set of trades, each of which employs the same configuration as a solution to MMP^d for its specified pair of traders.

THEOREM 5. Suppose agents b and s exhibit configuration parity, divisibility, and linear pricing, and there exists configuration x such that û_b(x) − û_s(x) > 0. Then t ∈ MMP^d(b, s) iff

x^t = arg max_x {û_b(x) − û_s(x)},  q^t = min(q̄_b, q̄_s).   (1)

The preceding results signify that under certain conditions, we can divide the global optimization problem into two parts: first find a bilateral trade that maximizes unit surplus for each pair of traders (or total surplus in the non-aggregation case), and then use the results to find a globally optimal set of trades. In the following two sections we investigate each of these subproblems.

4. UTILITY REPRESENTATION AND MMP
We turn next to consider the problem of finding a best deal between pairs of traders. The complexity of MMP depends pivotally on the representation by bids of offer sets, an issue we have postponed to this point. Note that issues of utility representation and MMP apply to a broad class of multiattribute mechanisms, beyond the multiattribute call markets we emphasize. For example, the complexity results contained in this section apply equally to the bidding problem faced by sellers in reverse auctions, given a published buyer scoring function.

The simplest representation of an offer set is a direct enumeration of configurations and associated quantities and payments. This approach treats the configurations as atomic entities, making no use of attribute structure. A common and inexpensive enhancement is to enable a trader to express sets of configurations, by specifying subsets of the domains of component attributes. Associating a single quantity and payment with a set of configurations expresses indifference among them; hence we refer to such a set as an indifference range. (Indifference ranges should not be mistaken for indifference curves, which express dependency between the attributes; indifference curves can be expressed by the more elaborate utility representations discussed below.) Indifference ranges include the case of attributes with a natural ordering, in which a bid specifies a minimum or maximum acceptable attribute level. The use of indifference ranges can be convenient for MMP. The compatibility of two indifference ranges is simply found by testing set intersection for each attribute, as demonstrated by the decision-tree algorithm of Fink et al. [12]. Alternatively, bidders may specify willingness-to-pay functions û in terms of compact functional forms.
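When unit values can be enumerated (or otherwise evaluated) per configuration, equation (1) yields a direct procedure for MMP^d. A minimal Python sketch under Theorem 5's assumptions; the function name and dictionary-based representation are our own:

```python
def mmp_divisible(u_b, u_s, qbar_b, qbar_s):
    """MMP^d under divisibility, linear pricing, and configuration parity
    (eq. (1)): pick the configuration maximizing unit surplus, and trade
    the largest quantity both sides allow. u_b / u_s map configurations
    to unit willingness to pay / accept."""
    common = set(u_b) & set(u_s)
    if not common:
        return None
    x = max(common, key=lambda c: u_b[c] - u_s[c])
    if u_b[x] - u_s[x] <= 0:
        return None                   # no positive-surplus configuration
    return x, min(qbar_b, qbar_s)
```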
Enumeration-based representations, even when enhanced with indifference ranges, are ultimately limited by the exponential size of attribute space. Functional forms may avoid this explosion, but only if û reflects structure among the attributes. Moreover, even given a compact specification of û, we gain computational benefits only if we can perform the matching without expanding the û values of an exponential number of configuration points.

4.1 Additive Forms
One particularly useful multiattribute representation is known as the additive scoring function. Though this form is widely used in practice and in the academic literature, it is important to stress the assumptions behind it. The theory of multiattribute representation is best developed in the context where û is interpreted as a utility function representing an underlying preference order [17]. We present the premises of additive utility theory in this section, and discuss some generalizations in the next.

DEFINITION 17. A set of attributes Y ⊂ X is preferentially independent (PI) of its complement Z = X \ Y if the conditional preference order over Y given a fixed level Z⁰ of Z is the same regardless of the choice of Z⁰. In other words, the preference order over the projection of X on the attributes in Y is the same for any instantiation of the attributes in Z.

DEFINITION 18. X = {x_1, ..., x_m} is mutually preferentially independent (MPI) if any subset of X is preferentially independent of its complement.

THEOREM 6 ([9]). A preference order over a set of attributes X has an additive utility function representation

u(x_1, ..., x_m) = Σ_{i=1}^{m} u_i(x_i)

iff X is mutually preferentially independent.

A utility function over outcomes including money is quasi-linear if the function can be represented as a function over non-monetary attributes plus payments, π. Interpreting û as a utility function over non-monetary attributes is tantamount to assuming quasi-linearity. Even when quasi-linearity is assumed, however, MPI over non-monetary attributes is not sufficient for the quasi-linear utility function to be additive. For this, we also need that each of the pairs (π, X_i) for any attribute X_i would be PI of the rest of the attributes. This (by MAUT) in turn implies that the set of attributes including money is MPI and the utility function can be represented as

u(x_1, ..., x_m, π) = Σ_{i=1}^{m} u_i(x_i) + π.

Given that form, a willingness-to-pay function reflecting u can be represented additively, as

û(x) = Σ_{i=1}^{m} u_i(x_i).

In many cases the additivity assumption provides practically crucial simplification of offer set elicitation. In addition to compactness, additivity dramatically simplifies MMP. If both sides provide additive û representations, the globally optimal match reduces to finding the optimal match separately for each attribute. A common scenario in procurement has the buyer define an additive scoring function, while suppliers submit enumerated offer points or indifference ranges. This model is still very amenable to MMP: for each element in a supplier's enumerated set, we optimize each attribute by finding the point in the supplier's allowable range that is most preferred by the buyer.
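The per-attribute decomposition just described admits a one-line implementation. The sketch below (the naming is our own, for illustration) matches an additive buyer scoring function against a supplier's indifference ranges, one attribute at a time:

```python
def best_config_additive(buyer_attr_utils, seller_ranges):
    """buyer_attr_utils[i]: dict mapping levels of attribute i to the
    buyer's additive utility u_i; seller_ranges[i]: set of levels the
    seller accepts for attribute i. Returns the jointly optimal config."""
    return tuple(max(levels, key=lambda lvl: u_i[lvl])
                 for u_i, levels in zip(buyer_attr_utils, seller_ranges))

# Example: CPU speed (GHz) and memory (MB); this seller offers only 512 MB.
u = [{1.6: 10.0, 2.0: 25.0}, {512: 5.0, 1024: 18.0}]
ranges = [{1.6, 2.0}, {512}]
print(best_config_additive(u, ranges))   # -> (2.0, 512)
```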
A special type of scoring (more particularly, cost) function was defined by Bichler and Kalagnanam [4] and called a configurable offer. This idea is geared towards procurement auctions: assuming suppliers are usually comfortable with expressing their preferences in terms of cost that is quasi-linear in every attribute, they can specify a price for a base offer, and an additional cost for every change in a specific attribute level. This model is essentially a pricing-out approach [17]. For this case, MMP can still be optimized on a per-attribute basis. A similar idea has been applied to one-sided iterative mechanisms [19], in which sellers refine prices on a per-attribute basis at each iteration.

4.2 Multiattribute Utility Theory
Under MPI, the tradeoffs between the attributes in each subset cannot be affected by the value of other attributes. For example, when buying a PC, a weaker CPU may increase the importance of the RAM compared to, say, the type of keyboard. Such relationships cannot be expressed under an additive model. Multiattribute utility theory (MAUT) develops various compact representations of utility functions that are based on weaker structural assumptions [17, 2]. There are several challenges in adapting these techniques to multiattribute bidding. First, as noted above, the theory is developed for utility functions, which may behave differently from willingness-to-pay functions. Second, computational efficiency of matching has not been an explicit goal of most work in the area. Third, adapting such representations to iterative mechanisms may be more challenging.

One representation that employs somewhat weaker assumptions than additivity, yet retains the summation structure, is the generalized additive (GA) decomposition:

u(x) = Σ_{j=1}^{J} f_j(x^j),  x^j ∈ X^j,   (2)

where the X^j are potentially overlapping sets of attributes, together exhausting the space X. A key point from our perspective is that the complexity of the matching is similar to the complexity of optimizing a single function, since the sum function is in the form (2) as well. Recent work by Gonzales and Perny [15] provides an elicitation process for GA decomposable preferences under certainty, as well as an optimization algorithm for the GA decomposed function. The complexity of exact optimization is exponential in the induced width of the graph. However, to become operational for multiattribute bidding this decomposition must be detectable and verifiable by statements over preferences with respect to price outcomes. We are exploring this topic in ongoing work [11].
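For illustration, evaluating a GA-decomposed willingness-to-pay function is a direct sum over (possibly overlapping) local terms. In the hypothetical sketch below, the CPU/RAM table encodes exactly the kind of interaction an additive model cannot express:

```python
def ga_value(x, terms):
    """Evaluate eq. (2): x maps attribute names to levels; terms is a list
    of (attrs, table) pairs, where attrs names a subset X^j and table maps
    tuples of its levels to the local value f_j."""
    return sum(table[tuple(x[a] for a in attrs)] for attrs, table in terms)

# RAM matters more with a fast CPU (non-additive); keyboard is separable.
terms = [(("cpu", "ram"), {("slow", "1GB"): 280.0, ("slow", "2GB"): 300.0,
                           ("fast", "1GB"): 350.0, ("fast", "2GB"): 500.0}),
         (("kbd",), {("std",): 20.0, ("ergo",): 35.0})]
print(ga_value({"cpu": "slow", "ram": "2GB", "kbd": "ergo"}, terms))  # 335.0
```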
5. SOLVING GMAP UNDER ALLOCATION CONSTRAINTS
Theorems 2, 3, and 4 establish conditions under which GMAP solutions must comprise elements from constituent MMP solutions. In Sections 5.1 and 5.2, we show how to compute these GMAP solutions, given the MMP solutions, under these conditions. In these settings, traders that aggregate partners also aggregate configurations; hence we refer to them simply as aggregating or non-aggregating. Section 5.3 suggests a means to relax the linear pricing restriction employed in these constructions. Section 5.4 provides strategies for allowing traders to aggregate partners and restrict configuration aggregation at the same time.

5.1 Notation and Graphical Representation
Our clearing algorithms are based on network flow formulations of the underlying optimization problem [1]. The network model is based on a bipartite graph, in which nodes on the left side represent buyers, and nodes on the right represent sellers. We denote the sets of buyers and sellers by B and S, respectively.

We define two graph families, one for the case of non-aggregating traders (called single-unit), and the other for the case of aggregating traders (called multi-unit). (In the next section, we introduce a hybrid form of graph accommodating mixes of the two trader categories.) For both types, a single directed arc is placed from a buyer i ∈ B to a seller j ∈ S if and only if MMP(i, j) is nonempty. We denote by T(i) the set of potential trading partners of trader i (i.e., the nodes connected to buyer or seller i in the bipartite graph).

In the single-unit case, we define the weight of an arc (i, j) as w_ij = σ(MMP(i, j)). Note that free disposal lets a buy offer receive a larger quantity than desired (and similarly for sell offers). For the multi-unit case, the weights are w_ij = σ¹(MMP(i, j)), and we associate the quantity q̄_i with the node for trader i. We also use the notation q_ij in the mathematical formulations to denote partial fulfillment of q^t for t = MMP(i, j).

5.2 Handling Indivisibility and Aggregation Constraints
Under the restrictions of Theorems 2, 3, or 4, and when the solution to MMP is given, GMAP exhibits strong similarity to the problem of clearing double auctions with assignment constraints [16]. A match in our bipartite representation corresponds to a potential trade in which assignment constraints are satisfied. Network flow formulations have been shown to model this problem under the assumption of indivisibility and aggregation for all traders. The novelty in this part of our work is the use of generalized network flow formulations for more complex cases where aggregation and divisibility may be controlled by traders.

Initially we examine the simple case of no aggregation (Theorem 2). Observe that the optimal allocation is simply the solution to the well-known weighted assignment problem [1] on the single-unit bipartite graph described above. The set of matches that maximizes the total weight of arcs corresponds to the set of trades that maximizes total surplus. Note that any form of (in)divisibility can also be accommodated in this model via the constituent MMP subproblems.

The next formulation solves the case in which all traders fall under case 2 of Theorem 3; that is, all traders are aggregating and divisible, and exhibit linear pricing. This case can be represented using the following linear program, corresponding to our multi-unit graph:

max Σ_{i∈B, j∈S} w_ij q_ij
s.t. Σ_{i∈T(j)} q_ij ≤ q̄_j,  j ∈ S
     Σ_{j∈T(i)} q_ij ≤ q̄_i,  i ∈ B
     q_ij ≥ 0,  j ∈ S, i ∈ B

Recall that the q_ij variables in the solution represent the number of units that buyer i procures from seller j. This formulation is known as the network transportation problem with inequality constraints, for which efficient algorithms are available [1]. It is a well-known property of the transportation problem (and flow problems on pure networks in general) that given integer input values, the optimal solution is guaranteed to be integer as well. Figure 1 demonstrates the transformation of a set of bids to a transportation problem instance.

Figure 1: Multi-unit matching with two boolean attributes. (a) Bids, with offers to buy in the left column and offers to sell at right. q@p indicates an offer to trade q units at price p per unit. Configurations are described in terms of constraints on attribute values. (b) Corresponding multi-unit assignment model. W represents arc weights (unit surplus), s represents source (exogenous) flow, and t represents sink quantity.
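A minimal sketch of this clearing step, using SciPy's general-purpose LP solver rather than a specialized transportation algorithm (all names are our own illustrative choices). By the integrality property noted above, integer quantity bounds admit an integer optimum, and a simplex-based solver will typically return one:

```python
from scipy.optimize import linprog

def clear_all_aggregating(w, qbar_b, qbar_s):
    """Clear the all-aggregating, divisible, linear-pricing case.
    w[i][j]: unit surplus of MMP(i, j), or None when there is no match.
    Returns a dict mapping (buyer, seller) arcs to traded quantities."""
    arcs = [(i, j) for i in range(len(qbar_b))
                   for j in range(len(qbar_s)) if w[i][j] is not None]
    if not arcs:
        return {}
    c = [-w[i][j] for (i, j) in arcs]          # linprog minimizes
    A_ub, b_ub = [], []
    for i in range(len(qbar_b)):               # buyer quantity limits
        A_ub.append([1.0 if a[0] == i else 0.0 for a in arcs])
        b_ub.append(qbar_b[i])
    for j in range(len(qbar_s)):               # seller quantity limits
        A_ub.append([1.0 if a[1] == j else 0.0 for a in arcs])
        b_ub.append(qbar_s[j])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(arcs))
    return {arcs[k]: res.x[k] for k in range(len(arcs)) if res.x[k] > 1e-9}
```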
The problem becomes significantly harder when aggregation is given as an option to bidders, requiring various enhancements to the basic multi-unit bipartite graph described above. In general, we consider traders that are either aggregating or not, with either divisible or AON offers. Initially we examine a special case, which at the same time demonstrates the hardness of the problem but still carries computational advantages. We designate one side (e.g., buyers) as restrictive (AON and non-aggregating), and the other side (sellers) as unrestrictive (divisible and aggregating). This problem can be represented using the following integer programming formulation:

max Σ_{i∈B, j∈S} w_ij q_ij
s.t. Σ_{i∈T(j)} q̄_i q_ij ≤ q̄_j,  j ∈ S
     Σ_{j∈T(i)} q_ij ≤ 1,  i ∈ B
     q_ij ∈ {0, 1},  j ∈ S, i ∈ B          (3)

This formulation is a restriction of the generalized assignment problem (GAP) [13]. Although GAP is known to be NP-hard, it can be solved relatively efficiently by exact or approximate algorithms. GAP is more general than the formulation above as it allows buy-side quantities (q̄_i above) to be different for each potential seller. That this formulation is NP-hard as well (even the case of a single seller corresponds to the knapsack problem) illustrates the drastic increase in complexity when traders with different constraints are admitted to the same problem instance.
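Formulation (3) can be prototyped directly with an off-the-shelf IP modeler. The sketch below uses the PuLP library with its default CBC solver; the function name and data layout are ours, and w[i][j] is taken to be the surplus of matching buyer i's full lot with seller j:

```python
import pulp

def clear_one_side_restrictive(w, qbar_b, qbar_s):
    """IP (3): buyers AON and non-aggregating; sellers divisible and
    aggregating. Returns the selected (buyer, seller) matches."""
    prob = pulp.LpProblem("gmap_one_side", pulp.LpMaximize)
    arcs = [(i, j) for i in range(len(qbar_b))
                   for j in range(len(qbar_s)) if w[i][j] is not None]
    q = {(i, j): pulp.LpVariable(f"q_{i}_{j}", cat=pulp.LpBinary)
         for (i, j) in arcs}
    prob += pulp.lpSum(w[i][j] * q[i, j] for (i, j) in arcs)
    for j in range(len(qbar_s)):   # seller capacity over full buyer lots
        prob += pulp.lpSum(qbar_b[i] * q[i, j]
                           for i in range(len(qbar_b)) if (i, j) in q) <= qbar_s[j]
    for i in range(len(qbar_b)):   # each buyer trades with at most one seller
        prob += pulp.lpSum(q[i, j]
                           for j in range(len(qbar_s)) if (i, j) in q) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for (i, j) in arcs if q[i, j].value() > 0.5]
```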
Other than the special case above, we found no advantage in limiting AON constraints when traders may specify aggregation constraints. Therefore, the next generalization allows any combination of the two boolean constraints; that is, any trader chooses among four bid types:

NI: Bid AON and not aggregating.
AD: Bid allows aggregation and divisibility.
AI: Bid AON, allows aggregation (quantity can be aggregated across configurations, as long as it sums to the whole amount).
ND: No aggregation, divisibility (one trade, but smaller quantities are acceptable).

To formulate an integer programming representation for the problem, we introduce the following variables. Boolean (0/1) variables r_i and r_j indicate whether buyer i and seller j participate in the solution (used for AON traders). Another indicator variable, y_ij, applied to non-aggregating buyer i and seller j, is one iff i trades with j. For aggregating traders, y_ij is not constrained.

max Σ_{i∈B, j∈S} w_ij q_ij                       (4a)
s.t. Σ_{j∈T(i)} q_ij = q̄_i r_i,  i ∈ AI_b         (4b)
     Σ_{j∈T(i)} q_ij ≤ q̄_i r_i,  i ∈ AD_b         (4c)
     Σ_{i∈T(j)} q_ij = q̄_j r_j,  j ∈ AI_s         (4d)
     Σ_{i∈T(j)} q_ij ≤ q̄_j r_j,  j ∈ AD_s         (4e)
     q_ij ≤ q̄_i y_ij,  i ∈ ND_b, j ∈ T(i)         (4f)
     q_ij ≤ q̄_j y_ij,  j ∈ NI_s, i ∈ T(j)         (4g)
     Σ_{j∈T(i)} y_ij ≤ r_i,  i ∈ NI_b ∪ ND_b      (4h)
     Σ_{i∈T(j)} y_ij ≤ r_j,  j ∈ NI_s ∪ ND_s      (4i)
     q_ij integer                                  (4j)
     y_ij, r_j, r_i ∈ {0, 1}                       (4k)

Figure 2: Generalized network flow model. B1 is a buyer in AD, B2 ∈ NI, B3 ∈ AI, B4 ∈ ND. V1 is a seller in ND, V2 ∈ AI, V4 ∈ AD. The g values represent arc gains.

Problem (4) has additional structure as a generalized min-cost flow problem with integral flow. (Constraint (4j) could be omitted, yielding computational savings, if non-integer quantities are allowed. Here and henceforth we assume the harder problem, where divisibility is with respect to integers.) A generalized flow network is a network in which each arc may have a gain factor, in addition to the pure network parameters (which are flow limits and costs). Flow in an arc is then multiplied by its gain factor, so that the flow that enters the end node of an arc equals the flow that entered from its start node, multiplied by the gain factor of the arc. The network model can in turn be translated into an IP formulation that captures such structure. The generalized min-cost flow problem is well-studied and has a multitude of efficient algorithms [1]. The faster algorithms are polynomial in the number of arcs and the logarithm of the maximal gain; that is, performance is not strongly polynomial but is polynomial in the size of the input.

The main benefit of this graphical formulation to our matching problem is that it provides a very efficient linear relaxation. Integer programming algorithms such as branch-and-bound use solutions to the linear relaxation instance to bound the optimal integer solution. Since network flow algorithms are much faster than arbitrary linear programs (generalized network flow simplex algorithms have been shown to run in practice only 2 or 3 times slower than pure network min-cost flow [1]), we expect a branch-and-bound solver for the matching problem to show improved performance when taking advantage of network flow modeling.

The network flow formulation is depicted in Figure 2. Non-restrictive traders are treated as in Figure 1. For a non-aggregating buyer, a single unit from the source will saturate up to one of the y_ij for all j, and be multiplied by q̄_i. If i ∈ ND, the end node of y_ij will function as a sink that may drain up to q̄_i of the entering flow. For i ∈ NI we use an indicator (0/1) arc r_i, on which the flow is multiplied by q̄_i. Trader i trades the full quantity iff r_i = 1. At the seller side, the end node of a q_ij arc functions as a source for sellers j ∈ ND, in order to let the flow through y_ij arcs be 0 or q̄_j. The flow is then multiplied by 1/q̄_j so 0/1 flows enter an end node which can drain either 1 or 0 units. For sellers j ∈ NI, arcs r_j ensure AON similarly to arcs r_i for buyers.

Having established this framework, we are ready to accommodate more flexible versions of side constraints. The first generalization is to replace the boolean AON constraint with divisibility down to q̲, the minimal quantity. In our network flow instance we simply need to turn the node of the constrained trader i (e.g., the node B3 in Figure 2) into a sink that can drain up to q̄_i − q̲_i units of flow. The integer program (4) can also be easily changed to accommodate this extension. Using gains, we can also apply batch size constraints. If a trader specifies a batch size β, we change the gain on the r arcs to β, and set the available flow of its origin to the maximal number of batches, q̄_i/β.

5.3 Nonlinear Pricing
A key assumption in handling aggregation up to this point is linear pricing, which enables us to limit attention to a single unit price. Divisibility without linear pricing allows expression of concave willingness-to-pay functions, corresponding to convex preference relations. Bidders may often wish to express non-convex offer sets, for example, due to fixed costs or switching costs in production settings [21]. We consider nonlinear pricing in the form of enumerated payment schedules; that is, defining values û(x, q) for a select set of quantities q. For the indivisible case, these points are distinguished in the offer set by satisfying the following:

∃π. (x, q, i, ∗, π) ∈ O^T_i ∧ ¬∃q′ < q. (x, q′, i, ∗, π) ∈ O^T_i

(cf. Definition 8, which defines the maximum quantity, q̄, as the largest of these). For the divisible case, the distinguished quantities are those where the unit price changes, which can be formalized similarly.

To handle nonlinear pricing, we augment the network to include flow possibilities corresponding to each of the enumerated quantities, plus additional structure to enforce exclusivity among them. In other words, the network treats the offer for a given quantity as in Section 5.2, and embeds this in an XOR relation to ensure that each trader picks only one of these quantities. Since for each such quantity choice we can apply Theorem 3 or 4, the solution we get is in fact the solution to GMAP. The network representation of the XOR relation (which can be embedded into the network of Figure 2) is depicted in Figure 3. For a trader i with K XOR quantity points, we define dummy variables z^k_i, k = 1, ..., K. Since we consider trades between every pair of quantity points we also have q^k_ij, k = 1, ..., K. For buyer i ∈ AI with XOR points at quantities q̄^k_i, we replace (4b) with the following constraints:

Σ_{j∈T(i)} q^k_ij = q̄^k_i z^k_i,  k = 1, ..., K
Σ_{k=1}^{K} z^k_i = r_i
z^k_i ∈ {0, 1},  k = 1, ..., K                      (5)

Figure 3: Extending the network flow model to express an XOR over quantities. B2 has 3 XOR points for 6, 3, or 5 units.
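As a sketch of how constraints (5) look in an IP modeler, the hypothetical helper below (PuLP again, with our own naming) creates the per-point variables for an aggregating buyer and ties them together under the XOR semantics:

```python
import pulp

def add_xor_quantity_points(prob, i, partners, qbar_k, r_i):
    """Add constraints (5) for buyer i in AI with XOR quantity points
    qbar_k[0..K-1]; r_i is the buyer's 0/1 participation variable."""
    K = len(qbar_k)
    z = [pulp.LpVariable(f"z_{i}_{k}", cat=pulp.LpBinary) for k in range(K)]
    q = [{j: pulp.LpVariable(f"q_{i}_{j}_{k}", lowBound=0, cat=pulp.LpInteger)
          for j in partners} for k in range(K)]
    for k in range(K):   # full quantity of point k is traded iff z[k] = 1
        prob += pulp.lpSum(q[k][j] for j in partners) == qbar_k[k] * z[k]
    prob += pulp.lpSum(z) == r_i   # exactly one point is active iff i trades
    return q, z
```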
5.4 Homogeneity Constraints
The model (4) handles constraints over the aggregation of quantities from different trading partners. When aggregation is allowed, the formulation permits trades involving arbitrary combinations of configurations. A homogeneity constraint [4] restricts such combinations, by requiring that configurations aggregated in an overall deal must agree on some or all attributes.

In the presence of homogeneity constraints, we can no longer apply the convenient separation of GMAP into MMP plus global bipartite optimization, as the solution to GMAP may include trades not part of any MMP solution. For example, let buyer b specify an offer for maximum quantity 10 of various acceptable configurations, with a homogeneity constraint over the attribute color. This means b is willing to aggregate deals over different trading partners and configurations, as long as all are the same color. If seller s can provide 5 blue units or 5 green units, and seller s′ can provide only 5 green units, we may prefer that b and s trade on green units, so that the trade can be aggregated with s′'s green units, even if the local surplus of a blue trade is greater.

Let {x_1, ..., x_H} be attributes that some trader constrains to be homogeneous. To preserve the network flow framework, we need to consider, for each trader, every point in the product domain of these homogeneous attributes. Thus, for every assignment x̂ to the homogeneous attributes, we compute MMP(b, s) under the constraint that configurations are consistent with x̂. We apply the same approach as in Section 5.3: solve the global optimization, such that the alternative x̂ assignments for each trader are combined under XOR semantics, thus enforcing homogeneity constraints.

The size of this network is exponential in the number of homogeneous attributes, since we need a node for each point in the product domain of all the homogeneous attributes of each trader. (If traders differ on which attributes they express such constraints over, we can limit consideration to the relevant alternatives. The complexity will still be exponential, but in the maximum number of homogeneous attributes for any pair of traders.) Hence this solution method will only be tractable in applications where the traders can be limited to a small number of homogeneous attributes.

It is important to note that the graph needs to include a node only for each point that potentially matches a point of the other side. It is therefore possible to make the problem tractable by limiting one of the sides to a less expressive bidding language, thereby limiting the set of potential matches.
For example, if sellers submit bounded sets of XOR points, we only need to consider the points in the combined set offered by the sellers, and the reduction to network flow is polynomial regardless of the number of homogeneous attributes.

If such simplifications do not apply, it may be preferable to solve the global problem directly as a single optimization problem. We provide the formulation for the special case of divisibility (with respect to integers) and configuration parity. Let i index buyers, j sellers, and h the homogeneous attributes. Variable x^h_ij ∈ X_h represents the value of attribute X_h in the trade between buyer i and seller j. Integer variable q_ij represents the quantity of the trade (zero for no trade) between i and j.

max Σ_{i∈B, j∈S} [û^B_i(x_ij, q_ij) − û^S_j(x_ij, q_ij)]
s.t. Σ_{j∈S} q_ij ≤ q̄_i,  i ∈ B
     Σ_{i∈B} q_ij ≤ q̄_j,  j ∈ S
     x^h_{1j} = x^h_{2j} = ··· = x^h_{|B|j},  j ∈ S, h ∈ {1, ..., H}
     x^h_{i1} = x^h_{i2} = ··· = x^h_{i|S|},  i ∈ B, h ∈ {1, ..., H}      (6)

Table 1 summarizes the mapping we presented from allocation constraints to the complexity of solving GMAP. Configuration parity is assumed for all cases but the first.

| Aggregation    | Hom. attr.  | Divisibility               | Linear pricing        | Technique                | Complexity          |
| No aggregation | N/A         | Any                        | Not required          | Assignment problem       | Polynomial          |
| All aggregate  | None        | Down to 0                  | Required              | Transportation problem   | Polynomial          |
| One side       | None        | Aggregating side divisible | Aggregating side      | GAP                      | NP-hard             |
| Optional       | None        | Down to q̲, batch           | Required              | Generalized network flow | NP-hard             |
| Optional       | Bounded     | Down to q̲, batch           | Bounded-size schedule | Generalized network flow | NP-hard             |
| Optional       | Not bounded | Down to q̲, batch           | Not required          | Nonlinear optimization   | Depends on û(x, q)  |

Table 1: Mapping from combinations of allocation constraints to the solution methods of GMAP. "One side" means that one side aggregates and is divisible, and the other side is restrictive. "Batch" means that traders may submit batch sizes.

6. EXPERIMENTAL RESULTS
We approach the experimental aspect of this work with two objectives. First, we seek a general idea of the sizes and types of clearing problems that can be solved under given time constraints. We also look to compare the performance of a straightforward integer program as in (4) with an integer program that is based on the network formulations developed here. Since we used CPLEX, a commercial optimization tool, the second objective could be achieved to the extent that CPLEX can take advantage of network structure present in a model.

We found that in addition to the problem size (in terms of number of traders), the number of aggregating traders plays a crucial role in determining complexity. When most of the traders are aggregating, problems of larger sizes can be solved quickly. For example, our IP model solved instances with 600 buyers and 500 sellers, where 90% of them are aggregating, in less than two minutes. When the aggregating ratio was reduced to 80% for the same data, solution time was just under five minutes. These results motivated us to develop a new network model. Rather than treat non-aggregating traders as a special case, the new model takes advantage of the single-unit nature of non-aggregating trades (treating the aggregating traders as a special case). This new model outperformed our other models on most problem instances, exceptions being those where aggregating traders constitute a vast majority (at least 80%).

This new model (Figure 4) has a single node for each non-aggregating trader, with a single-unit arc designating a match to another non-aggregating trader. An aggregating trader has a node for each potential match, connected (via y arcs) to a mutual source node. Unlike the previous model we allow fractional flow for this case, representing the traded fraction of the buyer's total quantity. (Traded quantity remains integer.)

We tested all three models on random data in the form of bipartite graphs encoding MMP solutions. In our experiments, each trader has a maximum quantity uniformly distributed over [30, 70], and minimum quantity uniformly distributed from zero to maximal quantity. Each buyer/seller pair is selected as matching with probability 0.75, with matches assigned a surplus uniformly distributed over [10, 70].
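For reproducibility, a sketch of an instance generator with the distributions just described (the function and its return layout are our own):

```python
import random

def random_instance(nb, ns, p_match=0.75, seed=0):
    """Bipartite instance encoding MMP solutions: per-trader maximum
    quantities ~ U[30, 70], minimum quantities ~ U[0, max], and each
    buyer/seller pair matched with probability p_match, with surplus
    ~ U[10, 70] on matched pairs."""
    rng = random.Random(seed)
    qbar_b = [rng.randint(30, 70) for _ in range(nb)]
    qbar_s = [rng.randint(30, 70) for _ in range(ns)]
    qmin_b = [rng.randint(0, q) for q in qbar_b]
    qmin_s = [rng.randint(0, q) for q in qbar_s]
    w = [[round(rng.uniform(10, 70), 2) if rng.random() < p_match else None
          for _ in range(ns)] for _ in range(nb)]
    return qbar_b, qmin_b, qbar_s, qmin_s, w
```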
Whereas the size of the problem is defined by the number of traders on each side, the problem complexity depends on the product |B| × |S|. The tests depicted in Figures 5-7 are for the worst case |B| = |S|, with each data point averaged over six samples. In the figures, the direct IP (4) is designated SW, our first network model (Figure 2) NW, and our revised network model (Figure 4) NW2.

Figure 4: Generalized network flow model. B1 is a buyer in AD, B2 ∈ AI, B3 ∈ NI, B4 ∈ ND. V1 is a seller in AD, V2 ∈ AI, V4 ∈ ND. The g values represent arc gains, and W values represent weights.

Figure 5: Average performance of models when 30% of traders aggregate.
Figure 6: Average performance of models when 50% of traders aggregate.
Figure 7: Average performance of models when 70% of traders aggregate.
Figure 8: Performance of models when varying percentage of aggregating traders.

Figure 8 shows how the various models are affected by a change in the percentage of aggregating traders, holding problem size fixed. (All tests were performed on Intel 3.4 GHz processors with 2048 KB cache. Tests that did not complete by the one-hour time limit were recorded as 4000 seconds.) Due to the integrality constraints, we could not test available algorithms specialized for network-flow problems on our test problems. Thus, we cannot fully evaluate the potential gain attributable to network structure. However, the model we built based on the insight from the network structure clearly provided a significant speedup, even without using a special-purpose algorithm. Model NW2 provided speedups of a factor of 4-10 over the model SW. This was consistent throughout the problem sizes, including the smaller sizes for which the speedup is not visually apparent on the chart.

7. CONCLUSIONS
The implementation and deployment of market exchanges requires the development of bidding languages, information feedback policies, and clearing algorithms that are suitable for the target domain, while paying heed to the incentive properties of the resulting mechanisms. For multiattribute exchanges, the space of feasible such mechanisms is constrained by computational limitations imposed by the clearing process. The extent to which the space of feasible mechanisms may be quantified a priori will facilitate the search for such exchanges in the full mechanism design problem.

In this work, we investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We developed a formal semantic framework for characterizing expressible offers, and introduced some basic classes of restrictions.
Our key technical results identify sets of conditions under which the overall matching problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. Based on these results, we developed network flow models for the overall clearing problem, which facilitate classification of problem versions by computational complexity, and provide guidance for developing solution algorithms and relaxing bidding constraints.

8. ACKNOWLEDGMENTS
This work was supported in part by NSF grant IIS-0205435, and the STIET program under NSF IGERT grant 0114368. We are grateful for comments from an anonymous reviewer. Some of the underlying ideas were developed while the first two authors worked at TradingDynamics Inc. and Ariba Inc. in 1999-2001 (cf. US Patent 6,952,682). We thank Yoav Shoham, Kumar Ramaiyer, and Gopal Sundaram for fruitful discussions about multiattribute auctions in that time frame.

9. REFERENCES
[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows. Prentice-Hall, 1993.
[2] F. Bacchus and A. Grove. Graphical models for preference and utility. In Eleventh Conference on Uncertainty in Artificial Intelligence, pages 3-10, Montreal, 1995.
[3] M. Bichler. The Future of e-Markets: Multi-Dimensional Market Mechanisms. Cambridge U. Press, New York, NY, USA, 2001.
[4] M. Bichler and J. Kalagnanam. Configurable offers and winner determination in multi-attribute auctions. European Journal of Operational Research, 160:380-394, 2005.
[5] M. Bichler, M. Kaukal, and A. Segev. Multi-attribute auctions for electronic procurement. In Proceedings of the 1st IBM IAC Workshop on Internet Based Negotiation Technologies, 1999.
[6] C. Boutilier, T. Sandholm, and R. Shields. Eliciting bid taker non-price preferences in (combinatorial) auctions. In Nineteenth Natl. Conf. on Artificial Intelligence, pages 204-211, San Jose, 2004.
[7] F. Branco. The design of multidimensional auctions. RAND Journal of Economics, 28(1):63-81, 1997.
[8] Y.-K. Che. Design competition through multidimensional auctions. RAND Journal of Economics, 24(4):668-680, 1993.
[9] G. Debreu. Topological methods in cardinal utility theory. In K. Arrow, S. Karlin, and P. Suppes, editors, Mathematical Methods in the Social Sciences. Stanford University Press, 1959.
[10] N. Economides and R. A. Schwartz. Electronic call market trading. Journal of Portfolio Management, 21(3), 1995.
[11] Y. Engel and M. P. Wellman. Multiattribute utility representation for willingness-to-pay functions. Tech. report, Univ. of Michigan, 2006.
[12] E. Fink, J. Johnson, and J. Hu. Exchange market for complex goods: Theory and experiments. Netnomics, 6(1):21-42, 2004.
[13] M. L. Fisher, R. Jaikumar, and L. N. Van Wassenhove. A multiplier adjustment method for the generalized assignment problem. Management Science, 32(9):1095-1103, 1986.
[14] J. Gong. Exchanges for complex commodities: Search for optimal matches. Master's thesis, University of South Florida, 2002.
[15] C. Gonzales and P. Perny. GAI networks for decision making under certainty. In IJCAI-05 Workshop on Preferences, Edinburgh, 2005.
[16] J. R. Kalagnanam, A. J. Davenport, and H. S. Lee. Computational aspects of clearing continuous call double auctions with assignment constraints and indivisible demand. Electronic Commerce Research, 1(3):221-238, 2001.
[17] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, 1976.
[18] N. Nisan. Bidding and allocation in combinatorial auctions. In Second ACM Conference on Electronic Commerce, pages 1-12, Minneapolis, MN, 2000.
[19] D. C. Parkes and J. Kalagnanam. Models for iterative multiattribute procurement auctions. Management Science, 51:435-451, 2005.
[20] T. Sandholm and S. Suri. Side constraints and non-price attributes in markets. In IJCAI-01 Workshop on Distributed Constraint Reasoning, Seattle, 2001.
[21] L. J. Schvartzman and M. P. Wellman. Market-based allocation with indivisible bids. In AAMAS-05 Workshop on Agent-Mediated Electronic Commerce, Utrecht, 2005.
[22] J. Shachat and J. T. Swarthout. Procurement auctions for differentiated goods. Technical Report 0310004, Economics Working Paper Archive at WUSTL, Oct. 2003.
[23] A. V. Sunderam and D. C. Parkes. Preference elicitation in proxied multiattribute auctions. In Fourth ACM Conference on Electronic Commerce, pages 214-215, San Diego, 2003.
[24] P. R. Wurman, M. P. Wellman, and W. E. Walsh. A parametrization of the auction design space. Games and Economic Behavior, 35:304-338, 2001.
Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions ABSTRACT We investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. We analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory. Network flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms. Experimental trials help distinguish tractable problem classes for proposed solution techniques. 1. BACKGROUND A multiattribute auction is a market-based mechanism where goods are described by vectors of features, or attributes [3, 5, 8, 19]. Such mechanisms provide traders with the ability to negotiate over a multidimensional space of potential deals, delaying commitment to specific configurations until the most promising candidates are identified. For example, in a multiattribute auction for computers, the good may be defined by attributes such as processor speed, memory, and hard disk capacity. Agents have varying preferences (or costs) associated with the possible configurations. For example, a buyer may be willing to purchase a computer with a 2 GHz processor, 500 MB of memory, and a 50 GB hard disk for a price no greater than $500, or the same computer with 1GB of memory for a price no greater than $600. Existing research in multiattribute auctions has focused primarily on one-sided mechanisms, which automate the process whereby a single agent negotiates with multiple potential trading partners [8, 7, 19, 5, 23, 22]. Models of procurement typically assume the buyer has a value function, v, ranging over the possible configurations, X, and that each seller i can similarly be associated with a cost function ci over this domain. The role of the auction is to elicit these functions (possibly approximate or partial versions), and identify the surplus-maximizing deal. In this case, such an outcome would be arg maxi, x v (x) − ci (x). This problem can be translated into the more familiar auction for a single good without attributes by computing a score for each attribute vector based on the seller valuation function, and have buyers bid scores. Analogs of the classic first - and second-price auctions correspond to firstand second-score auctions [8, 7]. In the absence of a published buyer scoring function, agents on both sides may provide partial specifications of the deals they are willing to engage. Research on such auctions has, for example, produced iterative mechanisms for eliciting cost functions incrementally [19]. Other efforts focus on the optimization problem facing the bid taker, for example considering side constraints on the combination of trades comprising an overall deal [4]. Side constraints have also been analyzed in the context of combinatorial auctions [6, 20]. Our emphasis is on two-sided multiattribute auctions, where multiple buyers and sellers submit bids, and the objective is to construct a set of deals maximizing overall surplus. 
Previous research on such auctions includes works by Fink et al. [12] and Gong [14], both of which consider a matching problem for continuous double auctions (CDAs), where deals are struck whenever a pair of compatible bids is identified. In a call market, in contrast, bids accumulate until designated times (e.g., on a periodic or scheduled basis) at which the auction clears by determining a comprehensive match over the entire set of bids. Because the optimization is performed over an aggregated scope, call markets often enjoy liquidity and efficiency advantages over CDAs [10].1 Clearing a multiattribute CDA is much like clearing a one-sided multiattribute auction. Because nothing happens between bids, the problem is to match a given new bid (say, an offer to buy) with the existing bids on the other (sell) side. Multiattribute call markets are potentially much more complex. Constructing an optimal overall matching may require consideration of many different combina1In the interim between clears, call markets may also disseminate price quotes providing summary information about the state of the auction [24]. Such price quotes are often computed based on hypothetical clears, and so the clearing algorithm may be invoked more frequently than actual market clearing operations. tions of trades, among the various potential trading-partner pairings. The problem can be complicated by restrictions on overall assignments, as expressed in side constraints [16]. The goal of the present work is to develop a general framework for multiattribute call markets, to enable investigation of design issues and possibilities. In particular, we use the framework to explore tradeoffs between expressive power of agent bids and computational properties of auction clearing. We conduct our exploration independent of any consideration of strategic issues bearing on mechanism design. As with analogous studies of combinatorial auctions [18], we intend that tradeoffs quantified in this work can be combined with incentive factors within a comprehensive overall approach to multiattribute auction design. We provide the formal semantics of multiattribute offers in our framework in the next section. We abstract, where appropriate, from the specific language used to express offers, characterizing expressiveness semantically in terms of what deals may be offered. This enables us to identify some general conditions under which the problem of multilateral matching can be decomposed into bilateral matching problems. We then develop a family of network flow problems that capture corresponding classes of multiattribute call market optimizations. Experimental trials provide preliminary confirmation that the network formulations provide useful structure for implementing clearing algorithms. 2. MULTIATTRIBUTE OFFERS 2.1 Basic Definitions The distinguishing feature of a multiattribute auction is that the goods are defined by vectors of attributes, x = (x1,..., xm), xj ∈ Xj. A configuration is a particular attribute vector, x ∈ X = Qmj = 1 Xj. The outcome of the auction is a set of bilateral trades. Trade t takes the form t = (x, q, b, s, π), signifying that agent b buys q> 0 units of configuration x from seller s, for payment π> 0. For convenience, we use the notation xt to denote the configuration associated with trade t (and similarly for other elements of t). For a set of trades T, we denote by Ti that subset of T involving agent i (i.e., b = i or s = i). Let T denote the set of all possible trades. 
A bid expresses an agent's willingness to participate in trades. We specify the semantics of a bid in terms of offer sets. Let OT_i ⊆ T_i denote agent i's trade offer set. Intuitively, this represents the trades in which i is willing to participate. However, since the outcome of the auction is a set of trades, several of which may involve agent i, we must in general consider willingness to engage in trade combinations. Accordingly, we introduce the combination offer set of agent i, OC_i ⊆ 2^{T_i}. 2.2 Specifying Offer Sets A fully expressive bid language would allow specification of arbitrary combination offer sets. We instead consider a more limited class which, while restrictive, still captures most forms of multiattribute bidding proposed in the literature. Our bids directly specify part of the agent's trade offer set, and include further directives controlling how this can be extended to the full trade and combination offer sets. For example, one way to specify a trade (buy) offer set would be to describe a set of configurations and quantities, along with the maximal payment one would exchange for each (x, q) specified. This description could be by enumeration, or any available means of defining such a mapping. An explicit set of trades in the offer set generally entails inclusion of many more implicit trades. We assume payment monotonicity, which dictates that buyers are willing to pay less and sellers to be paid more: for all i, π > π′ > 0, (x, q, i, s, π) ∈ OT_i ⇒ (x, q, i, s, π′) ∈ OT_i, (x, q, b, i, π′) ∈ OT_i ⇒ (x, q, b, i, π) ∈ OT_i. We also assume free disposal, which dictates that for all i, q > q′ > 0, (x, q′, i, s, π) ∈ OT_i ⇒ (x, q, i, s, π) ∈ OT_i, (x, q, b, i, π) ∈ OT_i ⇒ (x, q′, b, i, π) ∈ OT_i. Note that the conditions for agents in the role of buyers and sellers are analogous. Henceforth, for expository simplicity, we present all definitions with respect to buyers only, leaving the definition for sellers as understood. Allowing agents' bids to comprise offers from both buyer and seller perspectives is also straightforward. An assertion that offers are divisible entails further implicit members in the trade offer set. DEFINITION 1 (DIVISIBLE OFFER). Agent i's offer is divisible down to q0 iff for all q′ with q0 < q′ < q, (x, q, i, s, π) ∈ OT_i ⇒ (x, q′, i, s, (q′/q)π) ∈ OT_i. We employ the shorthand divisible to mean divisible down to 0. The definition above specifies arbitrary divisibility. It would likewise be possible to define divisibility with respect to integers, or to any given finite granularity. Note that when offers are divisible, it suffices to specify one offer corresponding to the maximal quantity one is willing to trade for any given configuration, trading partner, and per-unit payment (called the price). At the extreme of indivisibility are all-or-none (AON) offers. In many cases, the agent will be indifferent with respect to different trading partners. In that event, it may omit the partner element from trades directly specified in its offer set, and simply assert that its offer is anonymous. Because omitting trading partner qualifications simplifies the exposition, we generally assume in the following that all offers are anonymous unless explicitly specified otherwise. Extending to the non-anonymous case is conceptually straightforward. We employ the wild-card symbol ∗ in place of an agent identifier to indicate that any agent is acceptable. To specify a trade offer set, a bidder directly specifies a set of willing trades, along with any regularity conditions (e.g., divisibility, anonymity) that implicitly extend the set.
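The closure semantics above lends itself to a direct membership test. The following sketch, with our own hypothetical encoding of buyer-side offers as (x, q, π) triples, checks whether an arbitrary trade belongs to the trade offer set implied by payment monotonicity, free disposal, and divisibility down to 0; it is an illustration, not the paper's formalism.

```python
# Explicit offers give the maximal payment pi for quantity q of config x.
def in_buy_offer_set(explicit, x, q, pi, divisible=True):
    for (x0, q0, pi0) in explicit:
        if x0 != x:
            continue
        if divisible and 0 < q <= q0:
            # divisibility: (x, q0, pi0) entails (x, q, (q/q0) * pi0);
            # payment monotonicity then admits any smaller payment.
            if pi <= (q / q0) * pi0:
                return True
        if q >= q0 and pi <= pi0:
            # free disposal (more units, same payment), combined with
            # payment monotonicity (any smaller payment).
            return True
    return False

explicit = [("red", 10, 100.0)]
assert in_buy_offer_set(explicit, "red", 5, 50.0)    # divided pro rata
assert in_buy_offer_set(explicit, "red", 12, 80.0)   # free disposal
assert not in_buy_offer_set(explicit, "red", 5, 60.0)
```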
The full trade offer set is then defined by the closure of this direct set with respect to payment monotonicity, free disposal, and any applicable divisibility assumptions. We next consider the specification of combination offer sets. Without loss of generality, we restrict each trade set T ∈ OC_i to include at most one trade for any combination of configuration and trading partner (multiple such trades are equivalent to one net trade aggregating the quantities and payments). The key question is to what extent the agent is willing to aggregate deals across configurations or trading partners. One possibility is disallowing any aggregation. DEFINITION 4 (NO AGGREGATION). The no-aggregation combinations are given by ONA_i = {∅} ∪ {{t} | t ∈ OT_i}. Agent i's offer exhibits non-aggregation iff OC_i = ONA_i. A more flexible policy is to allow aggregation across trading partners, keeping configuration constant. In other words, we may create new trade offer combinations by splitting the common trade (quantity and payment, not necessarily proportionately) between two sellers. In some cases, it might be reasonable to form combinations by aggregating different configurations. Agent i's offer allows configuration aggregation iff, whenever {(x, q, i, ∗, π)} ∪ T ∈ OC_i and (x′, q, i, ∗, π′) ∈ OT_i, then for any 0 ≤ q′ ≤ q (and analogously when it is a seller), {(x, q′, i, ∗, (q′/q)π), (x′, q − q′, i, ∗, ((q − q′)/q)π′)} ∪ T ∈ OC_i. Note that combination offer sets can accommodate offerings of configuration bundles. However, classes of bundles formed by partner or configuration aggregation are highly regular, covering only a specific type of bundle formed by splitting a desired quantity across configurations. This is quite restrictive compared to the general combinatorial case. 2.3 Willingness to Pay An agent's offer trade set implicitly defines the agent's willingness to pay for any given configuration and quantity. We assume anonymity to avoid conditioning our definitions on trading partner. We use the symbol û to recognize that willingness to pay can be viewed as a proxy for the agent's utility function, measured in monetary units. The superscript B distinguishes the buyer's willingness-to-pay function, û^B_i(x, q), the maximum payment buyer i will make for q units of configuration x, from a seller's willingness to accept, û^S_i(x, q), defined as the minimum payment seller i will accept for q units of configuration x. We omit the superscript where the distinction is inessential or clear from context. DEFINITION 8 (TRADE QUANTITY BOUNDS). Agent i's minimum trade quantity for configuration x is given by q_i(x) = min{q | ∃π. (x, q, i, ∗, π) ∈ OT_i}; the maximum trade quantity, q̄_i(x), is defined analogously as the largest such q. When the agent has no offers involving x, we take q_i(x) = q̄_i(x) = 0. It is useful to define a special case where all configurations are offered in the same quantity range. DEFINITION 9 (CONFIGURATION PARITY). Agent i's offers exhibit configuration parity iff the trade quantity bounds are common across all configurations the agent offers, that is, q̄_i(x) > 0 and q̄_i(x′) > 0 imply q_i(x) = q_i(x′) and q̄_i(x) = q̄_i(x′). Under configuration parity we drop the arguments from trade quantity bounds, yielding the constants q̄ and q which apply to all offers. DEFINITION 10 (LINEAR PRICING). Agent i's offers exhibit linear pricing iff willingness to pay scales proportionally with quantity over the offered range. Note that linear pricing assumes divisibility down to q_i(x). Given linear pricing, we can define the unit willingness to pay, û_i(x) = û_i(x, q̄_i(x))/q̄_i(x), and take û_i(x, q) = qû_i(x) for all q_i(x) ≤ q ≤ q̄_i(x). In general, an agent's willingness to pay may depend on a context of other trades the agent is engaging in. DEFINITION 11 (WILLINGNESS TO PAY IN CONTEXT). Agent i's willingness to pay for quantity q of configuration x in the context of other trades T is given by û^B_i(x, q | T) = max{π | {(x, q, i, ∗, π)} ∪ T ∈ OC_i}. The surplus of a trade is the difference between the buyer's willingness to pay and the seller's willingness to accept, each taken in the context of the other trades that agent is party to; note that the trade surplus does not depend on the payment, which is simply a transfer from buyer to seller.
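As a small illustration of Definitions 8-10, the sketch below encodes an agent's explicit offer points and derives quantity bounds, unit willingness to pay, and linearly priced û(x, q); the dictionary encoding and all numbers are our own assumptions.

```python
offers = {                      # x -> {q: u_hat(x, q)}; hypothetical data
    ("red",): {2: 20.0, 10: 100.0},
    ("blue",): {10: 90.0},
}

def quantity_bounds(x):
    """(q_i(x), q_bar_i(x)); (0, 0) when x is not offered (Definition 8)."""
    qs = sorted(offers.get(x, {}))
    return (qs[0], qs[-1]) if qs else (0, 0)

def unit_wtp(x):
    """u_hat_i(x) = u_hat_i(x, q_bar_i(x)) / q_bar_i(x)."""
    q_max = quantity_bounds(x)[1]
    return offers[x][q_max] / q_max

def wtp(x, q):
    """Linear pricing: u_hat_i(x, q) = q * u_hat_i(x) on the offered range."""
    return q * unit_wtp(x)

print(quantity_bounds(("red",)))   # (2, 10)
print(unit_wtp(("red",)))          # 10.0 per unit
print(wtp(("blue",), 4))           # 36.0
```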
Proofs of all the following results are provided in an extended version of this paper available from the authors. THEOREM 3. Suppose each trader's offers satisfy one of the following sets of conditions: 1. No aggregation and configuration parity (Definitions 4 and 9). 2. Divisibility, linear pricing, and configuration parity (Definitions 1, 10, and 9), with combination offer set defined as the minimal set consistent with configuration aggregation (Definition 6).2 (2 That is, the closure under configuration aggregation of ONA_i.) Then the solution to GMAP (the global multiattribute allocation problem, which seeks a surplus-maximizing set of trades) consists of a set of trades, each of which employs a configuration that solves MMP (the multiattribute matching problem, which seeks an optimal deal for a given pair of traders). Let MMPd(b, s) denote a modified version of MMP, where OT_b and OT_s are extended to assume divisibility (i.e., the offer sets are taken to be their closures under Definition 1). Then we can extend Theorem 3 to allow aggregating agents to maintain AON or min-quantity offers (Theorem 4). The preceding results signify that under certain conditions, we can divide the global optimization problem into two parts: first find a bilateral trade that maximizes unit surplus for each pair of traders (or total surplus in the non-aggregation case), and then use the results to find a globally optimal set of trades. In the following two sections we investigate each of these subproblems. 4. UTILITY REPRESENTATION AND MMP We turn next to consider the problem of finding a best deal between pairs of traders. The complexity of MMP depends pivotally on the representation by bids of offer sets, an issue we have postponed to this point. Note that issues of utility representation and MMP apply to a broad class of multiattribute mechanisms, beyond the multiattribute call markets we emphasize. For example, the complexity results contained in this section apply equally to the bidding problem faced by sellers in reverse auctions, given a published buyer scoring function. The simplest representation of an offer set is a direct enumeration of configurations and associated quantities and payments. This approach treats the configurations as atomic entities, making no use of attribute structure. A common and inexpensive enhancement is to enable a trader to express sets of configurations, by specifying subsets of the domains of component attributes. Associating a single quantity and payment with a set of configurations expresses indifference among them; hence we refer to such a set as an indifference range.3 (3 These should not be mistaken for indifference curves, which express dependency between the attributes; indifference curves can be expressed by the more elaborate utility representations discussed below.) Indifference ranges include the case of attributes with a natural ordering, in which a bid specifies a minimum or maximum acceptable attribute level. The use of indifference ranges can be convenient for MMP. The compatibility of two indifference ranges is simply found by testing set intersection for each attribute, as demonstrated by the decision-tree algorithm of Fink et al. [12]. Alternatively, bidders may specify willingness-to-pay functions û in terms of compact functional forms. Enumeration-based representations, even when enhanced with indifference ranges, are ultimately limited by the exponential size of attribute space. Functional forms may avoid this explosion, but only if û reflects structure among the attributes. Moreover, even given a compact specification of û, we gain computational benefits only if we can perform the matching without expanding the û values of an exponential number of configuration points. 4.1 Additive Forms One particularly useful multiattribute representation is known as the additive scoring function. Though this form is widely used in practice and in the academic literature, it is important to stress the assumptions behind it.
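The per-attribute intersection test for indifference ranges is straightforward to state in code. The sketch below, in the spirit of the decision-tree matcher of Fink et al. [12], assumes both bids constrain the same attributes by finite sets of acceptable values (attributes with a natural ordering would intersect intervals instead); it is illustrative only.

```python
# Two indifference-range bids are compatible iff, for every attribute,
# the sets of acceptable values intersect.
def compatible(buy_bid, sell_bid):
    return all(buy_bid[a] & sell_bid[a] for a in buy_bid)

buy = {"cpu_ghz": {1.5, 2.0}, "ram_gb": {1, 2, 4}}
sell = {"cpu_ghz": {2.0, 2.5}, "ram_gb": {2}}
print(compatible(buy, sell))   # True: cpu 2.0 with ram 2 suits both bids
```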
The theory of multiattribute representation is best developed in the context where û is interpreted as a utility function representing an underlying preference order [17]. We present the premises of additive utility theory in this section, and discuss some generalizations in the next. A subset Y of the attributes {X1, ... , Xm} is preferentially independent (PI) of its complement Z if the conditional preference order over instantiations of Y, holding the attributes in Z fixed, does not depend on which instantiation of Z is chosen. In other words, the preference order over the projection of X on the attributes in Y is the same for any instantiation of the attributes in Z. The attributes are mutually preferentially independent (MPI) if every subset of the attributes is PI of its complement. A utility function over outcomes including money is quasi-linear if the function can be represented as a function over non-monetary attributes plus payments, π. Interpreting û as a utility function over non-monetary attributes is tantamount to assuming quasi-linearity. Even when quasi-linearity is assumed, however, MPI over non-monetary attributes is not sufficient for the quasi-linear utility function to be additive. For this, we also need that each of the pairs (π, Xi) for any attribute Xi would be PI of the rest of the attributes. This (by MAUT) in turn implies that the set of attributes including money is MPI and the utility function can be represented as u(x1, ... , xm, π) = Σ_{j=1}^{m} f_j(xj) + π. Given that form, a willingness-to-pay function reflecting u can be represented additively, as û(x1, ... , xm) = Σ_{j=1}^{m} f_j(xj). (1) In many cases the additivity assumption provides practically crucial simplification of offer set elicitation. In addition to compactness, additivity dramatically simplifies MMP. If both sides provide additive û representations, the globally optimal match reduces to finding the optimal match separately for each attribute. A common scenario in procurement has the buyer define an additive scoring function, while suppliers submit enumerated offer points or indifference ranges. This model is still very amenable to MMP: for each element in a supplier's enumerated set, we optimize each attribute by finding the point in the supplier's allowable range that is most preferred by the buyer. A special type of scoring (more particularly, cost) function was defined by Bichler and Kalagnanam [4] and called a configurable offer. This idea is geared towards procurement auctions: assuming suppliers are usually comfortable with expressing their preferences in terms of cost that is quasi-linear in every attribute, they can specify a price for a base offer, and additional cost for every change in a specific attribute level. This model is essentially a "pricing out" approach [17]. For this case, MMP can still be optimized on a per-attribute basis. A similar idea has been applied to one-sided iterative mechanisms [19], in which sellers refine prices on a per-attribute basis at each iteration. 4.2 Multiattribute Utility Theory Under MPI, the tradeoffs between the attributes in each subset cannot be affected by the value of other attributes. For example, when buying a PC, a weaker CPU may increase the importance of the RAM compared to, say, the type of keyboard. Such relationships cannot be expressed under an additive model. Multiattribute utility theory (MAUT) develops various compact representations of utility functions that are based on weaker structural assumptions [17, 2]. There are several challenges in adapting these techniques to multiattribute bidding. First, as noted above, the theory is developed for utility functions, which may behave differently from willingness-to-pay functions.
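The per-attribute optimization just described admits a compact sketch: given a published additive buyer scoring function and a supplier indifference range, each attribute is optimized independently. The scoring functions and ranges below are made-up assumptions.

```python
# Additive buyer scoring: one function f_j per attribute.
buyer_f = {
    "cpu_ghz": lambda v: 60 * v,
    "ram_gb": lambda v: 25 * v,
}

# Supplier indifference range: a set of acceptable levels per attribute.
supplier_range = {"cpu_ghz": {1.5, 2.0, 2.5}, "ram_gb": {1, 2}}

# Optimize attribute by attribute: pick the buyer-preferred level in range.
best_config = {
    attr: max(levels, key=buyer_f[attr])
    for attr, levels in supplier_range.items()
}
score = sum(buyer_f[a](v) for a, v in best_config.items())
print(best_config, score)   # {'cpu_ghz': 2.5, 'ram_gb': 2} 200.0
```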
Second, computational efficiency of matching has not been an explicit goal of most work in the area. Third, adapting such representations to iterative mechanisms may be more challenging. One representation that employs somewhat weaker assumptions than additivity, yet retains the summation structure, is the generalized additive (GA) decomposition: û(x) = Σ_{j=1}^{g} f_j(x_j), (2) where the Xj are potentially overlapping sets of attributes, together exhausting the space X, and x_j denotes the projection of x onto Xj. A key point from our perspective is that the complexity of the matching is similar to the complexity of optimizing a single function, since the sum function is in the form (2) as well. Recent work by Gonzales and Perny [15] provides an elicitation process for GA decomposable preferences under certainty, as well as an optimization algorithm for the GA decomposed function. The complexity of exact optimization is exponential in the induced width of the graph. However, to become operational for multiattribute bidding this decomposition must be detectable and verifiable by statements over preferences with respect to price outcomes. We are exploring this topic in ongoing work [11]. 5. SOLVING GMAP UNDER ALLOCATION CONSTRAINTS Theorems 2, 3, and 4 establish conditions under which GMAP solutions must comprise elements from constituent MMP solutions. In Sections 5.1 and 5.2, we show how to compute these GMAP solutions, given the MMP solutions, under these conditions. In these settings, traders that aggregate partners also aggregate configurations; hence we refer to them simply as "aggregating" or "non-aggregating". Section 5.3 suggests a means to relax the linear pricing restriction employed in these constructions. Section 5.4 provides strategies for allowing traders to aggregate partners and restrict configuration aggregation at the same time. 5.1 Notation and Graphical Representation Our clearing algorithms are based on network flow formulations of the underlying optimization problem [1]. The network model is based on a bipartite graph, in which nodes on the left side represent buyers, and nodes on the right represent sellers. We denote the sets of buyers and sellers by B and S, respectively. We define two graph families, one for the case of non-aggregating traders (called single-unit), and the other for the case of aggregating traders (called multi-unit).4 (4 In the next section, we introduce a hybrid form of graph accommodating mixes of the two trader categories.) For both types, a single directed arc is placed from a buyer i ∈ B to a seller j ∈ S if and only if MMP(i, j) is nonempty. We denote by T(i) the set of potential trading partners of trader i (i.e., the nodes connected to buyer or seller i in the bipartite graph). In the single-unit case, we define the weight of an arc (i, j) as w_ij = σ(MMP(i, j)), the surplus of the optimal trade between i and j. Note that free disposal lets a buy offer receive a larger quantity than desired (and similarly for sell offers). For the multi-unit case, the weights are the unit surpluses, w_ij = σ1(MMP(i, j)), and we associate the quantity q̄_i with the node for trader i. We also use the notation q_ij in the mathematical formulations to denote partial fulfillment of q_t for t = MMP(i, j). 5.2 Handling Indivisibility and Aggregation Constraints Under the restrictions of Theorems 2, 3, or 4, and when the solution to MMP is given, GMAP exhibits strong similarity to the problem of clearing double auctions with assignment constraints [16]. A match in our bipartite representation corresponds to a potential trade in which assignment constraints are satisfied. Network flow formulations have been shown to model this problem under the assumption of indivisibility and aggregation for all traders.
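The construction of the bipartite graph can be sketched as follows, where mmp() is a stand-in for the paper's MMP procedure that simply returns the best surplus over configurations both sides accept (or None); the data and names are hypothetical.

```python
def mmp(buyer, seller):
    """Stand-in for MMP(i, j): best surplus over mutually acceptable configs."""
    common = buyer["configs"] & seller["configs"]
    if not common:
        return None
    return max(buyer["wtp"][x] - seller["wta"][x] for x in common)

buyers = {"b1": {"configs": {"red", "blue"},
                 "wtp": {"red": 10.0, "blue": 12.0}}}
sellers = {"s1": {"configs": {"blue"}, "wta": {"blue": 7.0}}}

arcs = {}
for i, b in buyers.items():
    for j, s in sellers.items():
        w = mmp(b, s)
        if w is not None and w > 0:
            arcs[(i, j)] = w    # w_ij = surplus of the optimal (i, j) trade
print(arcs)                     # {('b1', 's1'): 5.0}
```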
The novelty in this part of our work is the use of generalized network flow formulations for more complex cases where aggregation and divisibility may be controlled by traders. Initially we examine the simple case of no aggregation (Theorem 2). Observe that the optimal allocation is simply the solution to the well-known weighted assignment problem [1] on the single-unit bipartite graph described above. The set of matches that maximizes the total weight of arcs corresponds to the set of trades that maximizes total surplus. Note that any form of (in)divisibility can also be accommodated in this model via the constituent MMP subproblems. The next formulation solves the case in which all traders fall under case 2 of Theorem 3, that is, all traders are aggregating and divisible, and exhibit linear pricing. This case can be represented using the following linear program, corresponding to our multi-unit graph: max Σ_{i∈B} Σ_{j∈S} w_ij q_ij subject to Σ_{j∈S} q_ij ≤ q̄_i for each i ∈ B, Σ_{i∈B} q_ij ≤ q̄_j for each j ∈ S, and q_ij ≥ 0. Recall that the q_ij variables in the solution represent the number of units that buyer i procures from seller j. This formulation is known as the network transportation problem with inequality constraints, for which efficient algorithms are available [1]. It is a well-known property of the transportation problem (and flow problems on pure networks in general) that given integer input values, the optimal solution is guaranteed to be integer as well. Figure 1 demonstrates the transformation of a set of bids to a transportation problem instance. Figure 1: Multi-unit matching with two boolean attributes. (a) Bids, with offers to buy in the left column and offers to sell at right. q@p indicates an offer to trade q units at price p per unit. Configurations are described in terms of constraints on attribute values. (b) Corresponding multi-unit assignment model. W represents arc weights (unit surplus), s represents source (exogenous) flow, and t represents sink quantity. The problem becomes significantly harder when aggregation is given as an option to bidders, requiring various enhancements to the basic multi-unit bipartite graph described above. In general, we consider traders that are either aggregating or not, with either divisible or AON offers. Initially we examine a special case, which at the same time demonstrates the hardness of the problem but still carries computational advantages. We designate one side (e.g., buyers) as restrictive (AON and non-aggregating), and the other side (sellers) as unrestrictive (divisible and aggregating). This problem can be represented using the following integer programming formulation: max Σ_{i∈B} Σ_{j∈S} w_ij q̄_i y_ij subject to Σ_{i∈B} q̄_i y_ij ≤ q̄_j for each j ∈ S, Σ_{j∈S} y_ij ≤ 1 for each i ∈ B, and y_ij ∈ {0, 1}. This formulation is a restriction of the generalized assignment problem (GAP) [13]. Although GAP is known to be NP-hard, it can be solved relatively efficiently by exact or approximate algorithms. GAP is more general than the formulation above as it allows buy-side quantities (q̄_i above) to be different for each potential seller. That this formulation is NP-hard as well (even the case of a single seller corresponds to the knapsack problem) illustrates the drastic increase in complexity when traders with different constraints are admitted to the same problem instance. Other than the special case above, we found no advantage in limiting AON constraints when traders may specify aggregation constraints. Therefore, the next generalization allows any combination of the two boolean constraints; that is, any trader chooses among four bid types: NI: bid AON and not aggregating.
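As a sketch of the transportation relaxation, the following solves the multi-unit linear program above with scipy.optimize.linprog on a made-up two-by-two instance; the q_ij variables are flattened row-major, and the integrality of the optimum follows from the pure-network structure rather than from the solver.

```python
import numpy as np
from scipy.optimize import linprog

w = np.array([[5.0, 3.0],    # w_ij: unit surplus for buyer i, seller j
              [4.0, 6.0]])
q_buy = [40, 50]             # buyer capacities q_bar_i
q_sell = [30, 45]            # seller capacities q_bar_j

n_b, n_s = w.shape
A, b = [], []
for i in range(n_b):         # sum_j q_ij <= q_bar_i
    row = np.zeros(n_b * n_s); row[i * n_s:(i + 1) * n_s] = 1
    A.append(row); b.append(q_buy[i])
for j in range(n_s):         # sum_i q_ij <= q_bar_j
    col = np.zeros(n_b * n_s); col[j::n_s] = 1
    A.append(col); b.append(q_sell[j])

# linprog minimizes, so negate the weights to maximize total surplus.
res = linprog(-w.ravel(), A_ub=np.vstack(A), b_ub=b, bounds=(0, None))
print(res.x.reshape(n_b, n_s), -res.fun)   # q_ij matrix and total surplus
```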
AD: bid allows aggregation and divisibility. AI: bid AON, allows aggregation (quantity can be aggregated across configurations, as long as it sums to the whole amount). ND: no aggregation, divisibility (one trade, but smaller quantities are acceptable). To formulate an integer programming representation for the problem, we introduce the following variables. Boolean (0/1) variables r_i and r′_j indicate whether buyer i and seller j participate in the solution (used for AON traders). Another indicator variable, y_ij, applied to non-aggregating buyer i and seller j, is one iff i trades with j. For aggregating traders, y_ij is not constrained. Figure 2: Generalized network flow model. B1 is a buyer in AD, B2 ∈ NI, B3 ∈ AI, B4 ∈ ND. V1 is a seller in ND, V2 ∈ AI, V4 ∈ AD. The g values represent arc gains. Problem (4) has additional structure as a generalized min-cost flow problem with integral flow.5 (5 Constraint (4j) could be omitted (yielding computational savings) if non-integer quantities are allowed. Here and henceforth we assume the harder problem, where divisibility is with respect to integers.) A generalized flow network is a network in which each arc may have a gain factor, in addition to the pure network parameters (which are flow limits and costs). Flow in an arc is then multiplied by its gain factor, so that the flow that enters the end node of an arc equals the flow that entered from its start node, multiplied by the gain factor of the arc. The network model can in turn be translated into an IP formulation that captures such structure. The generalized min-cost flow problem is well-studied and has a multitude of efficient algorithms [1]. The faster algorithms are polynomial in the number of arcs and the logarithm of the maximal gain; that is, performance is not strongly polynomial but is polynomial in the size of the input. The main benefit of this graphical formulation to our matching problem is that it provides a very efficient linear relaxation. Integer programming algorithms such as branch-and-bound use solutions to the linear relaxation instance to bound the optimal integer solution. Since network flow algorithms are much faster than arbitrary linear programs (generalized network flow simplex algorithms have been shown to run in practice only 2 or 3 times slower than pure network min-cost flow [1]), we expect a branch-and-bound solver for the matching problem to show improved performance when taking advantage of network flow modeling. The network flow formulation is depicted in Figure 2. Non-restrictive traders are treated as in Figure 1. For a non-aggregating buyer, a single unit from the source will saturate at most one of the arcs y_ij, and be multiplied by q̄_i. If i ∈ ND, the end node of y_ij will function as a sink that may drain up to q̄_i of the entering flow. For i ∈ NI we use an indicator (0/1) arc r_i, on which the flow is multiplied by q̄_i. Trader i trades the full quantity iff r_i = 1. At the seller side, the end node of a q_ij arc functions as a source for sellers j ∈ ND, in order to let the flow through the y′_ij arcs be 0 or q̄_j. The flow is then multiplied by 1/q̄_j, so 0/1 flows enter an end node which can drain either 1 or 0 units. For sellers j ∈ NI, arcs r′_j ensure AON similarly to arcs r_i for buyers. Having established this framework, we are ready to accommodate more flexible versions of side constraints. The first generalization is to replace the boolean AON constraint with divisibility down to q, the minimal quantity.
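The gain-arc mechanics can be illustrated in a few lines. The sketch below, using our own minimal Arc class, shows how an AON indicator arc r_i with gain q̄_i expands a 0/1 decision into a full quantity; it is a toy model of the semantics, not an implementation of the clearing network.

```python
class Arc:
    """Generalized-flow arc: flow leaving equals flow entering times gain."""
    def __init__(self, src, dst, gain=1.0, capacity=1.0):
        self.src, self.dst = src, dst
        self.gain, self.capacity = gain, capacity

    def push(self, flow_in):
        assert 0 <= flow_in <= self.capacity
        return flow_in * self.gain      # flow delivered at dst

q_bar = 40
r_i = Arc("source", "buyer_i", gain=q_bar, capacity=1)   # AON indicator arc
print(r_i.push(1))   # 40.0 units reach buyer_i's node: full AON quantity
print(r_i.push(0))   # 0.0: the trader stays out entirely
```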
In our network flow instance we simply need to turn the node of the constrained trader i (e.g., the node B3 in Figure 2) into a sink that can drain up to q̄_i − q_i units of flow. The integer program (4) can also be easily changed to accommodate this extension. Using gains, we can also apply batch size constraints. If a trader specifies a batch size β, we change the gain on the r arcs to β, and set the available flow of its origin to the maximal number of batches, q̄_i/β. 5.3 Nonlinear Pricing A key assumption in handling aggregation up to this point is linear pricing, which enables us to limit attention to a single unit price. Divisibility without linear pricing allows expression of concave willingness-to-pay functions, corresponding to convex preference relations. Bidders may often wish to express non-convex offer sets, for example, due to fixed costs or switching costs in production settings [21]. We consider nonlinear pricing in the form of enumerated payment schedules, that is, defining values û(x, q) for a select set of quantities q. For the indivisible case, these points are distinguished in the offer set as the quantities at which willingness to pay strictly increases, û(x, q) > û(x, q′) for all q′ < q (cf. Definition 8, which defines the maximum quantity, q̄, as the largest of these). For the divisible case, the distinguished quantities are those where the unit price changes, which can be formalized similarly. To handle nonlinear pricing, we augment the network to include flow possibilities corresponding to each of the enumerated quantities, plus additional structure to enforce exclusivity among them. In other words, the network treats the offer for a given quantity as in Section 5.2, and embeds this in an XOR relation to ensure that each trader picks only one of these quantities. Since for each such quantity choice we can apply Theorem 3 or 4, the solution we get is in fact the solution to GMAP. The network representation of the XOR relation (which can be embedded into the network of Figure 2) is depicted in Figure 3. For a trader i with K XOR quantity points, we define dummy variables z^k_i, k = 1, ... , K. Since we consider trades between every pair of quantity points we also have q^k_ij, k = 1, ... , K. For buyer i ∈ AI with XOR points at quantities q̄^k_i, we replace (4b) with constraints enforcing that flow is routed through at most one of the K quantity choices. Figure 3: Extending the network flow model to express an XOR over quantities. B2 has 3 XOR points for 6, 3, or 5 units. 5.4 Homogeneity Constraints The model (4) handles constraints over the aggregation of quantities from different trading partners. When aggregation is allowed, the formulation permits trades involving arbitrary combinations of configurations. A homogeneity constraint [4] restricts such combinations, by requiring that configurations aggregated in an overall deal must agree on some or all attributes. In the presence of homogeneity constraints, we can no longer apply the convenient separation of GMAP into MMP plus global bipartite optimization, as the solution to GMAP may include trades not part of any MMP solution. For example, let buyer b specify an offer for maximum quantity 10 of various acceptable configurations, with a homogeneity constraint over the attribute "color". This means b is willing to aggregate deals over different trading partners and configurations, as long as all are the same color. If seller s can provide 5 blue units or 5 green units, and seller s′ can provide only 5 green units, we may prefer that b and s trade on green units, even if the local surplus of a blue trade is greater.
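For intuition on the XOR construction, the brute-force sketch below scores a single buyer-seller pair under enumerated payment schedules, letting each side pick exactly one quantity point; the schedules are invented for illustration.

```python
from itertools import product

buyer_points = {3: 40.0, 6: 70.0, 5: 60.0}   # q -> willingness to pay
seller_points = {3: 20.0, 6: 55.0}           # q -> willingness to accept

# Each trader picks one XOR quantity point; quantities must agree.
best = max(
    ((qb, qs, buyer_points[qb] - seller_points[qs])
     for qb, qs in product(buyer_points, seller_points)
     if qb == qs),
    key=lambda t: t[2],
)
print(best)   # (3, 3, 20.0): the 3-unit trade beats the 6-unit one
```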
Let {X1, ... , XH} be the attributes that some trader constrains to be homogeneous. To preserve the network flow framework, we need to consider, for each trader, every point in the product domain of these homogeneous attributes. Thus, for every assignment x̂ to the homogeneous attributes, we compute MMP(b, s) under the constraint that configurations are consistent with x̂. We apply the same approach as in Section 5.3: solve the global optimization, such that the alternative x̂ assignments for each trader are combined under XOR semantics, thus enforcing homogeneity constraints. The size of this network is exponential in the number of homogeneous attributes, since we need a node for each point in the product domain of all the homogeneous attributes of each trader.6 (6 If traders differ on which attributes they express such constraints over, we can limit consideration to the relevant alternatives. The complexity will still be exponential, but in the maximum number of homogeneous attributes for any pair of traders.) Hence this solution method will only be tractable in applications where the traders can be limited to a small number of homogeneous attributes. It is important to note that the graph needs to include a node only for each point that potentially matches a point of the other side. It is therefore possible to make the problem tractable by limiting one of the sides to a less expressive bidding language, thereby limiting the set of potential matches. For example, if sellers submit bounded sets of XOR points, we only need to consider the points in the combined set offered by the sellers, and the reduction to network flow is polynomial regardless of the number of homogeneous attributes. If such simplifications do not apply, it may be preferable to solve the global problem directly as a single optimization problem. We provide the formulation for the special case of divisibility (with respect to integers) and configuration parity. Let i index buyers, j sellers, and h homogeneous attributes. Variable x^h_ij ∈ Xh represents the value of attribute Xh in the trade between buyer i and seller j. Integer variable q_ij represents the quantity of the trade (zero for no trade) between i and j. The objective is max Σ_i Σ_j [û^B_i(x_ij, q_ij) − û^S_j(x_ij, q_ij)], subject to the trade quantity bounds and to constraints requiring, for each trader that constrains attribute Xh, that x^h_ij take a common value across all of that trader's trades. Table 1 summarizes the mapping we presented from allocation constraints to the complexity of solving GMAP. Configuration parity is assumed for all cases but the first.
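The enumeration over the product domain of homogeneous attributes can be sketched directly on the color example above. The greedy fill per color below stands in for the constrained matching (valid here because the single buyer aggregates divisible linear-priced offers); seller names, quantities, and surpluses are assumptions mirroring the example.

```python
from itertools import product

colors = ["blue", "green"]
# seller -> {color: (available units, unit surplus with buyer b)}
sellers = {
    "s":  {"blue": (5, 4.0), "green": (5, 3.0)},
    "s2": {"green": (5, 3.0)},          # stand-in for seller s' in the text
}
demand = 10   # buyer b's quantity, homogeneous in color

best = None
for (color,) in product(colors):        # one homogeneous attribute
    units = surplus = 0
    supply = sorted((off[color] for off in sellers.values() if color in off),
                    key=lambda qw: qw[1], reverse=True)
    for q, w in supply:                 # fill demand, best unit surplus first
        take = min(q, demand - units)
        units, surplus = units + take, surplus + take * w
    if best is None or surplus > best[1]:
        best = (color, surplus, units)
print(best)   # ('green', 30.0, 10): green wins despite blue's higher unit surplus
```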
Rather than treat non-aggregating traders as a special case, the new model takes advantage of the single-unit nature of non-aggregating trades (treating the aggregating traders as a special case). This new model outperformed our other models on most problem instances, exceptions being those where aggregating traders constitute a vast majority (at least 80%). This new model (Figure 4) has a single node for each non-aggregating trader, with a single-unit arc designating a match to another non-aggregating trader. An aggregating trader has a node for each potential match, connected (via y arcs) to a mutual source node. Unlike the previous model, we allow fractional flow for this case, representing the traded fraction of the buyer's total quantity. We tested all three models on random data in the form of bipartite graphs encoding MMP solutions. In our experiments, each trader has a maximum quantity uniformly distributed over [30, 70], and minimum quantity uniformly distributed from zero to maximal quantity. Each buyer/seller pair is selected as matching with probability 0.75, with matches assigned a surplus uniformly distributed over [10, 70]. Whereas the size of the problem is defined by the number of traders on each side, the problem complexity depends on the product |B| × |S|. The tests depicted in Figures 5-7 are for the worst case |B| = |S|, with each data point averaged over six samples. In the figures, the direct IP (4) is designated "SW", our first network model (Figure 2) "NW", and our revised network model (Figure 4) "NW 2". Table 1: Mapping from combinations of allocation constraints to the solution methods of GMAP. One Side means that one side aggregates and is divisible, and the other side is restrictive. Batch means that traders may submit batch sizes. Figure 4: Generalized network flow model. B1 is a buyer in AD, B2 ∈ AI, B3 ∈ NI, B4 ∈ ND. V1 is a seller in AD, V2 ∈ AI, V4 ∈ ND. The g values represent arc gains, and W values represent weights. Figure 5: Average performance of models when 30% of traders aggregate. Figure 6: Average performance of models when 50% of traders aggregate. Figure 7: Average performance of models when 70% of traders aggregate. Figure 8: Performance of models when varying the percentage of aggregating traders. Figure 8 shows how the various models are affected by a change in the percentage of aggregating traders, holding problem size fixed. Due to the integrality constraints, we could not test available algorithms specialized for network-flow problems on our test problems. Thus, we cannot fully evaluate the potential gain attributable to network structure. However, the model we built based on the insight from the network structure clearly provided a significant speedup, even without using a special-purpose algorithm. Model NW 2 provided speedups of a factor of 4-10 over the model SW. This was consistent throughout the problem sizes, including the smaller sizes for which the speedup is not visually apparent on the chart.
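For reproducibility of the setup described, here is a sketch of an instance generator following the stated distributions; the encoding of traders and arcs is our own choice.

```python
import random

def make_instance(n_buyers, n_sellers, seed=0):
    """Random bipartite instance: quantities U[30,70], min U[0,max],
    match probability 0.75, matched surplus U[10,70]."""
    rng = random.Random(seed)
    def quantities(n):
        out = []
        for _ in range(n):
            q_max = rng.uniform(30, 70)
            out.append((rng.uniform(0, q_max), q_max))   # (q_min, q_max)
        return out
    arcs = {(i, j): rng.uniform(10, 70)                  # surplus w_ij
            for i in range(n_buyers) for j in range(n_sellers)
            if rng.random() < 0.75}
    return quantities(n_buyers), quantities(n_sellers), arcs

buyers, sellers, arcs = make_instance(5, 5)
print(len(arcs), "matching pairs out of 25")
```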
The extent to which the space of feasible mechanisms may be quantified a priori will facilitate the search for such exchanges in the full mechanism design problem. In this work, we investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We developed a formal semantic framework for characterizing expressible offers, and introduced some basic classes of restrictions. Our key technical results identify sets of conditions under which the overall matching problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. Based on these results, we developed network flow models for the overall clearing problem, which facilitate classification of problem versions by computational complexity, and provide guidance for developing solution algorithms and relaxing bidding constraints.
I-76
Negotiation by Abduction and Relaxation
This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals.
[ "negoti", "relax", "autom negoti", "logic program", "extend abduct", "condit propos", "multi-agent system", "on-to-on negoti", "altern propos", "specif meta-knowledg", "abduct framework", "abduct program", "drop condit", "anti-instanti", "induct gener", "minim explan", "integr constraint" ]
[ "P", "P", "P", "P", "P", "P", "U", "M", "M", "U", "R", "R", "M", "U", "U", "U", "U" ]
Negotiation by Abduction and Relaxation Chiaki Sakama Dept. Computer and Communication Sciences Wakayama University Sakaedani, Wakayama 640 8510, Japan sakama@sys.wakayama-u.ac.jp Katsumi Inoue National Institute of Informatics 2-1-2 Hitotsubashi, Chiyoda-ku Tokyo 101 8430, Japan ki@nii.ac.jp ABSTRACT This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals. Categories and Subject Descriptors F.4.1 [Mathematical Logic]: Logic and constraint programming; I.2.11 [Distributed Artificial Intelligence]: Multiagent systems General Terms Theory 1. INTRODUCTION Automated negotiation has received increasing attention in multi-agent systems, and a number of frameworks have been proposed in different contexts ([1, 2, 3, 5, 10, 11, 13, 14], for instance). Negotiation usually proceeds in a series of rounds and each agent makes a proposal at every round. An agent that received a proposal responds in two ways. One is a critique, which is a remark as to whether or not (parts of) the proposal is accepted. The other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13]. To see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. (Bi (or Si) represents an utterance of B (or S) in the i-th round.) B1: I want to buy a personal computer of the brand b1, with the specification of CPU: 1 GHz, Memory: 512 MB, HDD: 80 GB, and a DVD-RW driver. I want to get it at the price under 1200 USD. S1: We can provide a PC with the requested specification if you pay for it by cash. In this case, however, service points are not added for this special discount. B2: I cannot pay it by cash. S2: In a normal price, the requested PC costs 1300 USD. B3: I cannot accept the price. My budget is under 1200 USD. S3: We can provide another computer with the requested specification, except that it is made by the brand b2. The price is exactly 1200 USD. B4: I do not want a PC of the brand b2. Instead, I can downgrade a driver from DVD-RW to CD-RW in my initial proposal. S4: Ok, I accept your offer. In this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned. In the rest of the dialogue, B2, B3, S4 are critiques, while S2, S3, B4 are counter-proposals. Critiques are produced by evaluating a proposal in a knowledge base of an agent. In contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one. It is known that there are two ways of producing counter-proposals: extending the initial proposal or amending part of the initial proposal. According to [13], the first type appears in the dialogue: A: I propose that you provide me with service X. B: I propose that I provide you with service X if you provide me with service Z.
The second type is in the dialogue: A: I propose that I provide you with service Y if you provide me with service X. B: I propose that I provide you with service X if you provide me with service Z. A negotiation proceeds by iterating such give-and-take dialogues until it reaches an agreement/disagreement. In those dialogues, agents generate (counter-)proposals by reasoning on their own goals or objectives. The objective of the agent A in the above dialogues is to obtain service X. The agent B proposes conditions to provide the service. In the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise. In the dialogue of a buyer agent and a seller agent presented above, a buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW. Such behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems. Currently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals. The purpose of this paper is to mechanize a process of building (counter-)proposals in one-to-one negotiation dialogues. We suppose an agent who has a knowledge base represented by a logic program. We then introduce methods for generating three different types of proposals. First, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one. Second, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one. Third, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal. We develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques. We also provide procedures for computing proposals in logic programming. This paper is organized as follows. Section 2 introduces a logical framework used in this paper. Section 3 presents methods for constructing proposals, and provides a negotiation protocol. Section 4 provides methods for computing proposals in logic programming. Section 5 discusses related works, and Section 6 concludes the paper. 2. PRELIMINARIES Logic programs considered in this paper are extended disjunctive programs (EDP) [7]. An EDP (or simply a program) is a set of rules of the form: L1 ; · · · ; Ll ← Ll+1, ... , Lm, not Lm+1, ... , not Ln (n ≥ m ≥ l ≥ 0) where each Li is a positive/negative literal, i.e., A or ¬A for an atom A, and not is negation as failure (NAF). not L is called an NAF-literal. The symbol ; represents disjunction. The left-hand side of the rule is the head, and the right-hand side is the body. For each rule r of the above form, head(r), body+(r) and body−(r) denote the sets of literals {L1, ... , Ll}, {Ll+1, ... , Lm}, and {Lm+1, ... , Ln}, respectively. Also, not body−(r) denotes the set of NAF-literals {not Lm+1, ... , not Ln}. A disjunction of literals and a conjunction of (NAF-)literals in a rule are identified with its corresponding sets of literals. A rule r is often written as head(r) ← body+(r), not body−(r) or head(r) ← body(r) where body(r) = body+(r) ∪ not body−(r). A rule r is disjunctive if head(r) contains more than one literal.
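A small sketch of how EDP rules might be held in code, using our own frozen dataclass with the head/body+(r)/body−(r) split defined above; the string encoding of literals (and "-p" for a classically negated atom) is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head: frozenset = frozenset()       # disjunctive head; empty => constraint
    body_pos: frozenset = frozenset()   # body+(r)
    body_naf: frozenset = frozenset()   # body-(r): literals appearing under not

    def is_constraint(self):
        return not self.head            # head(r) = {} marks an integrity constraint

    def is_fact(self):
        return not self.body_pos and not self.body_naf

# p ; q <- r, not s
rule = Rule(frozenset({"p", "q"}), frozenset({"r"}), frozenset({"s"}))
print(rule.is_constraint(), rule.is_fact())   # False False
```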
A rule r is an integrity constraint if head(r) = ∅; and r is a fact if body(r) = ∅. A program is NAF-free if no rule contains NAF-literals. Two rules/literals are identified with respect to variable renaming. A substitution is a mapping from variables to terms θ = {x1/t1, ..., xn/tn}, where x1, ..., xn are distinct variables and each ti is a term distinct from xi. Given a conjunction G of (NAF-)literals, Gθ denotes the conjunction obtained by applying θ to G. A program, rule, or literal is ground if it contains no variable. A program P with variables is a shorthand of its ground instantiation Ground(P), the set of ground rules obtained from P by substituting variables in P by elements of its Herbrand universe in every possible way.

The semantics of an EDP is defined by the answer set semantics [7]. Let Lit be the set of all ground literals in the language of a program. Suppose a program P and a set of literals S (⊆ Lit). Then, the reduct P^S is the program which contains the ground rule head(r) ← body^+(r) iff there is a rule r in Ground(P) such that body^-(r) ∩ S = ∅. Given an NAF-free EDP P, Cn(P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground(P), body(r) ⊆ Cn(P) implies head(r) ∩ Cn(P) ≠ ∅; and (ii) logically closed, i.e., it is either consistent or equal to Lit. Given an EDP P and a set S of literals, S is an answer set of P if S = Cn(P^S). A program has none, one, or multiple answer sets in general. An answer set is consistent if it is not Lit. A program P is consistent if it has a consistent answer set; otherwise, P is inconsistent.

Abductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming. The abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15]. An abductive program is a pair ⟨P, H⟩ where P is an EDP and H is a set of literals called abducibles. When a literal L ∈ H contains variables, any instance of L is also an abducible. An abductive program ⟨P, H⟩ is consistent if P is consistent. Throughout the paper, abductive programs are assumed to be consistent unless stated otherwise.

Let G = L1, ..., Lm, not Lm+1, ..., not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm+1, ..., Ln appears in L1, ..., Lm. A set S of ground literals satisfies the conjunction G if {L1θ, ..., Lmθ} ⊆ S and {Lm+1θ, ..., Lnθ} ∩ S = ∅ for some ground instance Gθ with a substitution θ. Let ⟨P, H⟩ be an abductive program and G a conjunction as above. A pair (E, F) is an explanation of an observation G in ⟨P, H⟩ if:1
1. (P \ F) ∪ E has an answer set which satisfies G,
2. (P \ F) ∪ E is consistent,
3. E and F are sets of ground literals such that E ⊆ H \ P and F ⊆ H ∩ P.
When (P \ F) ∪ E has an answer set S satisfying the above three conditions, S is called a belief set of the abductive program ⟨P, H⟩ satisfying G (with respect to (E, F)). Note that if P has a consistent answer set S satisfying G, S is also a belief set of ⟨P, H⟩ satisfying G with respect to (E, F) = (∅, ∅). Extended abduction introduces/removes hypotheses to/from a program to explain an observation. Note that normal abduction (as in [9]) considers only introducing hypotheses to explain an observation. An explanation (E, F) of an observation G is called minimal if for any explanation (E′, F′) of G, E′ ⊆ E and F′ ⊆ F imply E′ = E and F′ = F.

1 This defines credulous explanations [15]. Skeptical explanations are used in [8].
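Since the answer set semantics above is defined by the fixpoint test S = Cn(P^S), it can be checked directly for small ground programs. The following Python sketch is ours, not part of the paper; it handles only ground, non-disjunctive programs, and the encoding of a rule as a (head, positive body, NAF body) triple is an assumption of the sketch.

    def reduct(program, s):
        # P^S: keep "head <- body+" for rules whose NAF part is disjoint from S
        return [(h, bp) for (h, bp, bn) in program if not (set(bn) & s)]

    def cn(nf_program):
        # least set of literals closed under an NAF-free, non-disjunctive program
        closed, changed = set(), True
        while changed:
            changed = False
            for h, bp in nf_program:
                if set(bp) <= closed and h not in closed:
                    closed.add(h)
                    changed = True
        return closed

    def is_answer_set(program, s):
        # S is an answer set of P iff S = Cn(P^S)
        return cn(reduct(program, s)) == s

    # p <- not q and q <- not p: two answer sets, {p} and {q}
    P = [("p", [], ["q"]), ("q", [], ["p"])]
    print(is_answer_set(P, {"p"}))        # True
    print(is_answer_set(P, {"p", "q"}))   # False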
Example 2.1. Consider the abductive program ⟨P, H⟩:
P: flies(x) ← bird(x), not ab(x),
   ab(x) ← broken-wing(x),
   bird(tweety) ←,
   bird(opus) ←,
   broken-wing(tweety) ← .
H: broken-wing(x).
The observation G = flies(tweety) has the minimal explanation (E, F) = (∅, {broken-wing(tweety)}).

3. NEGOTIATION
3.1 Conditional Proposals by Abduction
We suppose an agent who has a knowledge base represented by an abductive program ⟨P, H⟩. A program P consists of two types of knowledge, belief B and desire D, where B represents objective knowledge of an agent, while D represents subjective knowledge in general. We define P = B ∪ D, but do not distinguish B and D if such a distinction is not important in the context. In contrast, abducibles H are used for representing permissible conditions to make a compromise in the process of negotiation.

Definition 3.1. A proposal G is a conjunction of literals and NAF-literals:
L1, ..., Lm, not Lm+1, ..., not Ln
where every variable in G is existentially quantified at the front and range-restricted. In particular, G is called a critique if G = accept or G = reject, where accept and reject are reserved propositions. A counter-proposal is a proposal made in response to a proposal.

Definition 3.2. A proposal G is accepted in an abductive program ⟨P, H⟩ if P has an answer set satisfying G.

When a proposal is not accepted, abduction is used for seeking conditions to make it acceptable.

Definition 3.3. Let ⟨P, H⟩ be an abductive program and G a proposal. If (E, F) is a minimal explanation of Gθ for some substitution θ in ⟨P, H⟩, the conjunction G′:
Gθ, E, not F
is called a conditional proposal (for G), where "E, not F" represents the conjunction A1, ..., Ak, not Ak+1, ..., not Al for E = {A1, ..., Ak} and F = {Ak+1, ..., Al}.

Proposition 3.1. Let ⟨P, H⟩ be an abductive program and G a proposal. If G′ is a conditional proposal, there is a belief set S of ⟨P, H⟩ satisfying G′.
Proof. When G′ = Gθ, E, not F, (P \ F) ∪ E has a consistent answer set S satisfying Gθ, and E ∩ F = ∅. In this case, S satisfies Gθ, E, not F.

A conditional proposal G′ provides a minimal requirement for accepting the proposal G. If Gθ has multiple minimal explanations, several conditional proposals exist accordingly. When (E, F) ≠ (∅, ∅), a conditional proposal is used as a new proposal made in response to the proposal G.

Example 3.1. An agent seeks a position of research assistant at the computer department of a university, with the condition that the salary is at least 50,000 USD per year. The agent makes his/her request as the proposal:2
G = assist(compt_dept), salary(x), x ≥ 50,000.
The university has the abductive program ⟨P, H⟩:
P: salary(40,000) ← assist(compt_dept), not has_PhD,
   salary(60,000) ← assist(compt_dept), has_PhD,
   salary(50,000) ← assist(math_dept),
   salary(55,000) ← system_admin(compt_dept),
   employee(x) ← assist(x),
   employee(x) ← system_admin(x),
   assist(compt_dept) ; assist(math_dept) ; system_admin(compt_dept) ←,
H: has_PhD,
where available positions are represented by disjunction. According to P, the base salary of a research assistant at the computer department is 40,000 USD, but if he/she has a PhD, it is 60,000 USD. In this case, (E, F) = ({has_PhD}, ∅) is the minimal explanation of Gθ = assist(compt_dept), salary(60,000) with θ = {x/60,000}. Then, the conditional proposal made by the university becomes
assist(compt_dept), salary(60,000), has_PhD.

2 For notational convenience, we often include mathematical (in)equations in proposals/programs. They are written as literals, for instance, x ≥ y by geq(x, y) with a suitable definition of the predicate geq.
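For small ground programs, explanations in the above sense can also be found by brute force: enumerate candidate pairs (E, F) over the abducibles and test whether (P \ F) ∪ E has an answer set satisfying the observation. The sketch below is an illustration under the same ground, non-disjunctive triple encoding as the earlier sketch (helpers repeated so it is self-contained); the minimality check is omitted, and with atoms only, consistency is trivial. It recovers the explanation of Example 2.1.

    from itertools import chain, combinations

    def reduct(prog, s):
        return [(h, bp) for (h, bp, bn) in prog if not (set(bn) & s)]

    def cn(nf):
        closed, changed = set(), True
        while changed:
            changed = False
            for h, bp in nf:
                if set(bp) <= closed and h not in closed:
                    closed.add(h); changed = True
        return closed

    def subsets(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    def answer_sets(prog):
        # brute force: test every subset of the atoms occurring in prog
        atoms = {h for h, _, _ in prog} | {a for _, bp, bn in prog for a in bp + bn}
        for bits in subsets(sorted(atoms)):
            s = set(bits)
            if cn(reduct(prog, s)) == s:
                yield s

    def explanations(prog, hyps, goal):
        # extended abduction: E adds abducibles not in P, F removes abducible facts
        facts = {h for h, bp, bn in prog if not bp and not bn}
        for e in subsets(sorted(set(hyps) - facts)):
            for f in subsets(sorted(set(hyps) & facts)):
                p2 = [r for r in prog if not (r[0] in f and not r[1] and not r[2])]
                p2 += [(a, [], []) for a in e]
                if any(goal in s for s in answer_sets(p2)):
                    yield set(e), set(f)

    # Example 2.1 (t = tweety, o = opus), with H ground-instantiated:
    P = [("bird(t)", [], []), ("bird(o)", [], []),
         ("broken-wing(t)", [], []),
         ("ab(t)", ["broken-wing(t)"], []),
         ("ab(o)", ["broken-wing(o)"], []),
         ("flies(t)", ["bird(t)"], ["ab(t)"]),
         ("flies(o)", ["bird(o)"], ["ab(o)"])]
    H = ["broken-wing(t)", "broken-wing(o)"]
    print(next(iter(explanations(P, H, "flies(t)"))))
    # -> (set(), {'broken-wing(t)'})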
3.2 Neighborhood Proposals by Relaxation
When a proposal is unacceptable, an agent tries to construct a new counter-proposal by weakening constraints in the initial proposal. We use techniques of relaxation for this purpose. Relaxation is used as a technique of cooperative query answering in databases [4, 6]. When an original query fails in a database, relaxation expands the scope of the query by relaxing the constraints in the query. This allows the database to return neighborhood answers which are related to the original query. We use this technique for producing proposals in the process of negotiation.

Definition 3.4. Let ⟨P, H⟩ be an abductive program and G a proposal. Then, G is relaxed to G′ in the following three ways:
Anti-instantiation: Construct G′ such that G′θ = G for some substitution θ.
Dropping conditions: Construct G′ such that G′ ⊂ G.
Goal replacement: If G is a conjunction G1, G2, where G1 and G2 are conjunctions, and there is a rule L ← G′1 in P such that G′1θ = G1 for some substitution θ, then build G′ as Lθ, G2. Here, Lθ is called a replaced literal.
In each case, every variable in G′ is existentially quantified at the front and range-restricted.

Anti-instantiation replaces constants (or terms) with fresh variables. Dropping conditions eliminates some conditions in a proposal. Goal replacement replaces the condition G1 in G with a literal Lθ in the presence of a rule L ← G′1 in P under the condition G′1θ = G1. All these operations generalize proposals in different ways. Each G′ obtained by these operations is called a relaxation of G. It is worth noting that these operations are also used in the context of inductive generalization [12]. A relaxed proposal can produce new offers which are neighbors of the original proposal.

Definition 3.5. Let ⟨P, H⟩ be an abductive program and G a proposal.
1. Let G′ be a proposal obtained by anti-instantiation. If P has an answer set S which satisfies G′θ for some substitution θ and G′θ ≠ G, G′θ is called a neighborhood proposal by anti-instantiation.
2. Let G′ be a proposal obtained by dropping conditions. If P has an answer set S which satisfies G′θ for some substitution θ, G′θ is called a neighborhood proposal by dropping conditions.
3. Let G′ be a proposal obtained by goal replacement. For a replaced literal L ∈ G′ and a rule H ← B in P such that L = Hσ and (G′ \ {L}) ∪ Bσ ≠ G for some substitution σ, put G″ = (G′ \ {L}) ∪ Bσ. If P has an answer set S which satisfies G″θ for some substitution θ, G″θ is called a neighborhood proposal by goal replacement.

Example 3.2. (cont. Example 3.1) Given the proposal G = assist(compt_dept), salary(x), x ≥ 50,000:
• G′1 = assist(w), salary(x), x ≥ 50,000 is produced by substituting compt_dept with a variable w. As G′1θ1 = assist(math_dept), salary(50,000) with θ1 = {w/math_dept} is satisfied by an answer set of P, G′1θ1 becomes a neighborhood proposal by anti-instantiation.
• G′2 = assist(compt_dept), salary(x) is produced by dropping the salary condition x ≥ 50,000. As G′2θ2 = assist(compt_dept), salary(40,000) with θ2 = {x/40,000} is satisfied by an answer set of P, G′2θ2 becomes a neighborhood proposal by dropping conditions.
• G′3 = employee(compt_dept), salary(x), x ≥ 50,000 is produced by replacing assist(compt_dept) with employee(compt_dept) using the rule employee(x) ← assist(x) in P. By G′3 and the rule employee(x) ← system_admin(x) in P, G″3 = system_admin(compt_dept), salary(x), x ≥ 50,000 is produced. As G″3θ3 = system_admin(compt_dept), salary(55,000) with θ3 = {x/55,000} is satisfied by an answer set of P, G″3θ3 becomes a neighborhood proposal by goal replacement.
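The three relaxation operations of Definition 3.4 are purely syntactic transformations on the proposal, so they are easy to state in code. Below is an illustrative Python sketch in our own encoding, not the authors': a proposal is a list of (predicate, args) literals, strings starting with an uppercase letter play the role of variables, and only the variable-free matching case of goal replacement is handled.

    def anti_instantiate(goal, lit_idx, arg_idx, fresh="W"):
        # replace one constant by a fresh variable (G' with G'theta = G)
        pred, args = goal[lit_idx]
        new_args = tuple(fresh if i == arg_idx else a for i, a in enumerate(args))
        return goal[:lit_idx] + [(pred, new_args)] + goal[lit_idx + 1:]

    def drop_condition(goal, lit_idx):
        # eliminate one conjunct (G' is a proper subset of G)
        return goal[:lit_idx] + goal[lit_idx + 1:]

    def goal_replace(goal, lit_idx, head, body_lit):
        # replace a conjunct matching a rule body by the rule head (L <- G1)
        if goal[lit_idx] != body_lit:
            raise ValueError("selected literal does not match the rule body")
        return goal[:lit_idx] + [head] + goal[lit_idx + 1:]

    # G = assist(compt_dept), salary(X), X >= 50000  (as in Example 3.2)
    G = [("assist", ("compt_dept",)), ("salary", ("X",)), ("geq", ("X", "50000"))]
    print(anti_instantiate(G, 0, 0))   # assist(W), salary(X), X >= 50000
    print(drop_condition(G, 2))        # drops the salary bound
    print(goal_replace(G, 0, ("employee", ("compt_dept",)),
                       ("assist", ("compt_dept",))))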
Finally, extended abduction and relaxation are combined to produce conditional neighborhood proposals.

Definition 3.6. Let ⟨P, H⟩ be an abductive program and G a proposal.
1. Let G′ be a proposal obtained by either anti-instantiation or dropping conditions. If (E, F) is a minimal explanation of G′θ (≠ G) for some substitution θ, the conjunction G′θ, E, not F is called a conditional neighborhood proposal by anti-instantiation/dropping conditions.
2. Let G′ be a proposal obtained by goal replacement. Suppose G″ as in Definition 3.5(3). If (E, F) is a minimal explanation of G″θ for some substitution θ, the conjunction G″θ, E, not F is called a conditional neighborhood proposal by goal replacement.
A conditional neighborhood proposal reduces to a neighborhood proposal when (E, F) = (∅, ∅).

3.3 Negotiation Protocol
A negotiation protocol defines how to exchange proposals in the process of negotiation. This section presents a negotiation protocol in our framework. We suppose one-to-one negotiation between two agents who have a common ontology and the same language for successful communication.

Definition 3.7. A proposal L1, ..., Lm, not Lm+1, ..., not Ln violates an integrity constraint ← body^+(r), not body^-(r) if for any substitution θ, there is a substitution σ such that body^+(r)σ ⊆ {L1θ, ..., Lmθ}, body^-(r)σ ∩ {L1θ, ..., Lmθ} = ∅, and body^-(r)σ ⊆ {Lm+1θ, ..., Lnθ}.

Integrity constraints are conditions which an agent should satisfy, so they are used to explain why an agent does not accept a proposal. A negotiation proceeds in a series of rounds. Each i-th round (i ≥ 1) consists of a proposal G^i_1 made by one agent Ag1 and another proposal G^i_2 made by the other agent Ag2.

Definition 3.8. Let ⟨P1, H1⟩ be an abductive program of an agent Ag1 and G^j_2 a proposal made by Ag2 at the j-th round. A critique set of Ag1 (at the i-th round) is a set
CS^i_1(P1, G^j_2) = CS^{i-1}_1(P1, G^{j-1}_2) ∪ {r | r is an integrity constraint in P1 and G^j_2 violates r}
where j = i − 1 or i, and CS^0_1(P1, G^0_2) = CS^1_1(P1, G^0_2) = ∅. A critique set of an agent Ag1 accumulates integrity constraints which are violated by proposals made by the other agent Ag2. CS^i_2(P2, G^j_1) is defined in the same manner.

Definition 3.9. Let ⟨Pk, Hk⟩ be an abductive program of an agent Agk and Gj a proposal, which is not a critique, made by either agent at the j (≤ i)-th round. A negotiation set of Agk (at the i-th round) is a triple NS^i_k = (S^i_c, S^i_n, S^i_cn), where S^i_c is the set of conditional proposals, S^i_n is the set of neighborhood proposals, and S^i_cn is the set of conditional neighborhood proposals, produced by Gj and ⟨Pk, Hk⟩.

A negotiation set represents the space of possible proposals made by an agent. S^i_x (x ∈ {c, n, cn}) accumulates proposals produced by Gj (1 ≤ j ≤ i) according to Definitions 3.3, 3.5, and 3.6. Note that an agent can construct counter-proposals by modifying its own previous proposals or another agent's proposals.
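In the ground case, the violation test of Definition 3.7 reduces to three set comparisons, since the substitutions θ and σ become trivial. A small sketch under that restriction, with simplified literal names of our own, follows; the seller's first proposal in Example 3.3 below violates the buyer's constraint (15) in exactly this way.

    def violates(pos, naf, ic_pos, ic_neg):
        # ground case of Definition 3.7 for the constraint "<- ic_pos, not ic_neg"
        return (set(ic_pos) <= set(pos)           # body+ occurs positively in G
                and not (set(ic_neg) & set(pos))  # body- is not asserted by G
                and set(ic_neg) <= set(naf))      # body- occurs under "not" in G

    # constraint (15) "<- pay_cash" against a proposal asserting pay_cash:
    print(violates(pos=["pc_b1", "dvd_rw", "price_1170", "pay_cash"],
                   naf=["add_point"],
                   ic_pos=["pay_cash"], ic_neg=[]))   # True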
An agent Agk accumulates proposals that are made by Agk but are rejected by the other agent in the failed proposal set FP^i_k (at the i-th round), where FP^0_k = ∅.

Suppose two agents Ag1 and Ag2 who have abductive programs ⟨P1, H1⟩ and ⟨P2, H2⟩, respectively. Given a proposal G^1_1 which is satisfied by an answer set of P1, a negotiation starts. In response to the proposal G^i_1 made by Ag1 at the i-th round, Ag2 behaves as follows.
1. If G^i_1 = accept, an agreement is reached and negotiation ends in success.
2. Else if G^i_1 = reject, put FP^i_2 = FP^{i-1}_2 ∪ {G^{i-1}_2} where {G^0_2} = ∅. Proceed to step 4(b).
3. Else if P2 has an answer set satisfying G^i_1, Ag2 returns G^i_2 = accept to Ag1. Negotiation ends in success.
4. Otherwise, Ag2 behaves as follows. Put FP^i_2 = FP^{i-1}_2.
(a) If G^i_1 violates an integrity constraint in P2, return the critique G^i_2 = reject to Ag1, together with the critique set CS^i_2(P2, G^i_1).
(b) Otherwise, construct NS^i_2 as follows.
(i) Produce S^i_c. Let μ(S^i_c) = {p | p ∈ S^i_c \ FP^i_2 and p satisfies the constraints in CS^i_1(P1, G^{i-1}_2)}. If μ(S^i_c) ≠ ∅, select one from μ(S^i_c) and propose it as G^i_2 to Ag1; otherwise, go to (ii).
(ii) Produce S^i_n. If μ(S^i_n) ≠ ∅, select one from μ(S^i_n) and propose it as G^i_2 to Ag1; otherwise, go to (iii).
(iii) Produce S^i_cn. If μ(S^i_cn) ≠ ∅, select one from μ(S^i_cn) and propose it as G^i_2 to Ag1; otherwise, negotiation ends in failure. This means that Ag2 can make no counter-proposal, or every counter-proposal made by Ag2 is rejected by Ag1.

In step 4(a), Ag2 rejects the proposal G^i_1 and returns the reason for rejection as a critique set. This helps Ag1 in preparing its next counter-proposal. In step 4(b), Ag2 constructs a new proposal. In its construction, Ag2 should take care of the critique set CS^i_1(P1, G^{i-1}_2), which represents integrity constraints, if any, accumulated in previous rounds, that Ag1 must satisfy. Also, FP^i_2 is used for removing proposals which have already been rejected. Construction of S^i_x (x ∈ {c, n, cn}) in NS^i_2 is done incrementally by adding new counter-proposals produced from G^i_1 or G^{i-1}_2 to S^{i-1}_x. For instance, S^i_n in NS^i_2 is computed as
S^i_n = S^{i-1}_n ∪ {p | p is a neighborhood proposal made from G^i_1} ∪ {p | p is a neighborhood proposal made from G^{i-1}_2},
where S^0_n = ∅. That is, S^i_n is constructed from S^{i-1}_n by adding new proposals which are obtained by modifying the proposal G^i_1 made by Ag1 at the i-th round or the proposal G^{i-1}_2 made by Ag2 at the (i − 1)-th round. S^i_c and S^i_cn are obtained in the same way.

In the above protocol, an agent produces S^i_c first, then S^i_n, and finally S^i_cn. This strategy seeks conditions which satisfy the given proposal before neighborhood proposals which change the original one. Another strategy, which prefers neighborhood proposals over conditional ones, can also be considered. Conditional neighborhood proposals are considered last, since they differ from the original proposal to the maximal extent. The above protocol produces all candidate proposals in S^i_x for each x ∈ {c, n, cn} at once. We can consider a variant of the protocol in which the proposals in S^i_x are constructed one by one (see Example 3.3). The above protocol is applied repeatedly to each of the two negotiating agents until the negotiation ends in success/failure.
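The response step above can be summarized as a single function. The sketch below is a schematic Python rendering; the Agent fields (accepted, violated_ics, and the three proposal generators) stand for the constructions of Definitions 3.2-3.9 and are assumptions of the sketch, as are the toy lambdas in the demo.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        accepted: callable       # Definition 3.2: is the proposal satisfiable?
        violated_ics: callable   # Definitions 3.7/3.8: violated constraints
        generators: tuple        # S_c, S_n, S_cn producers, in preference order
        failed: set = field(default_factory=set)   # FP: own rejected proposals
        last_sent: object = None

    def respond(ag, g):
        if g == "accept":
            return "accept"                        # step 1: agreement reached
        if g == "reject" and ag.last_sent:
            ag.failed.add(ag.last_sent)            # step 2, then fall to 4(b)
        elif ag.accepted(g):
            return "accept"                        # step 3
        elif ag.violated_ics(g):
            return ("reject", ag.violated_ics(g))  # step 4(a): critique + reason
        for gen in ag.generators:                  # step 4(b): S_c, S_n, S_cn
            viable = [p for p in gen(g) if p not in ag.failed]
            if viable:
                ag.last_sent = viable[0]
                return viable[0]
        return "failure"                           # no counter-proposal remains

    buyer = Agent(accepted=lambda g: "cd-rw" in g,
                  violated_ics=lambda g: {"(15)"} if "pay_cash" in g else set(),
                  generators=(lambda g: [], lambda g: ["pc_b1 cd-rw under_1200"],
                              lambda g: []))
    print(respond(buyer, "pc_b1 dvd-rw price_1170 pay_cash"))
    # -> ('reject', {'(15)'})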
Formally, the above negotiation protocol has the following properties.

Theorem 3.2. Let Ag1 and Ag2 be two agents having abductive programs ⟨P1, H1⟩ and ⟨P2, H2⟩, respectively.
1. If ⟨P1, H1⟩ and ⟨P2, H2⟩ are function-free (i.e., both Pi and Hi contain no function symbol), any negotiation will terminate.
2. If a negotiation terminates with agreement on a proposal G, both ⟨P1, H1⟩ and ⟨P2, H2⟩ have belief sets satisfying G.
Proof. 1. When an abductive program is function-free, abducibles and negotiation sets are both finite. Moreover, if a proposal is once rejected, it is not proposed again, by the function μ. Thus, negotiation terminates in finitely many steps. 2. When a proposal G is made by Ag1, ⟨P1, H1⟩ has a belief set satisfying G. If the agent Ag2 accepts the proposal G, it is satisfied by an answer set of P2, which is also a belief set of ⟨P2, H2⟩.

Example 3.3. Consider the buying-selling situation in the introduction. The seller agent has the abductive program ⟨Ps, Hs⟩ in which Ps consists of belief Bs and desire Ds:
Bs: pc(b1, 1G, 512M, 80G) ; pc(b2, 1G, 512M, 80G) ←, (1)
    dvd-rw ; cd-rw ←, (2)
Ds: normal_price(1300) ← pc(b1, 1G, 512M, 80G), dvd-rw, (3)
    normal_price(1200) ← pc(b1, 1G, 512M, 80G), cd-rw, (4)
    normal_price(1200) ← pc(b2, 1G, 512M, 80G), dvd-rw, (5)
    price(x) ← normal_price(x), add_point, (6)
    price(x * 0.9) ← normal_price(x), pay_cash, not add_point, (7)
    add_point ←, (8)
Hs: add_point, pay_cash.
Here, (1) and (2) represent the selection of products. The atom pc(b1, 1G, 512M, 80G) represents that the seller agent has a PC of the brand b1 such that its CPU is 1GHz, its memory is 512MB, and its HDD is 80GB. Prices of products are represented as desires of the seller. The rules (3)-(5) give the normal prices of products. A normal price is a selling price on the condition that service points are added (6). On the other hand, a discount price applies if the paying method is cash and no service points are added (7). The fact (8) represents the addition of service points. This service would be withdrawn in case of discount prices, so add_point is specified as an abducible.

The buyer agent has the abductive program ⟨Pb, Hb⟩ in which Pb consists of belief Bb and desire Db:
Bb: drive ← dvd-rw, (9)
    drive ← cd-rw, (10)
    price(x) ←, (11)
Db: pc(b1, 1G, 512M, 80G) ←, (12)
    dvd-rw ←, (13)
    cd-rw ← not dvd-rw, (14)
    ← pay_cash, (15)
    ← price(x), x > 1200, (16)
Hb: dvd-rw.
Rules (12)-(16) are the buyer's desires. Among them, (15) and (16) impose constraints for buying a PC. A DVD-RW is specified as an abducible which is subject to concession.

(1st round) First, the following proposal is given by the buyer agent:
G^1_b: pc(b1, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200.
As Ps has no answer set which satisfies G^1_b, the seller agent cannot accept the proposal. The seller takes the action of making a counter-proposal and performs abduction. As a result, the seller finds the minimal explanation (E, F) = ({pay_cash}, {add_point}) which explains G^1_bθ1 with θ1 = {x/1170}. The seller constructs the conditional proposal
G^1_s: pc(b1, 1G, 512M, 80G), dvd-rw, price(1170), pay_cash, not add_point
and offers it to the buyer.

(2nd round) The buyer does not accept G^1_s because he/she cannot pay by cash (15). The buyer then returns the critique G^2_b = reject to the seller, together with the critique set CS^2_b(Pb, G^1_s) = {(15)}. In response to this, the seller tries to make another proposal which satisfies the constraint in this critique set. As G^1_s is stored in FP^2_s and no other conditional proposal satisfying the buyer's requirement exists, the seller produces neighborhood proposals.
He/she relaxes G^1_b by dropping x ≤ 1200 from the conditions, producing
pc(b1, 1G, 512M, 80G), dvd-rw, price(x).
As Ps has an answer set which satisfies
G^2_s: pc(b1, 1G, 512M, 80G), dvd-rw, price(1300),
the seller offers G^2_s as a new counter-proposal.

(3rd round) The buyer does not accept G^2_s because he/she cannot pay more than 1200 USD (16). The buyer again returns the critique G^3_b = reject to the seller, together with the critique set CS^3_b(Pb, G^2_s) = CS^2_b(Pb, G^1_s) ∪ {(16)}. The seller then considers another proposal by replacing b1 with a variable w; G^1_b now becomes
pc(w, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200.
As Ps has an answer set which satisfies
G^3_s: pc(b2, 1G, 512M, 80G), dvd-rw, price(1200),
the seller offers G^3_s as a new counter-proposal.

(4th round) The buyer does not accept G^3_s because a PC of the brand b2 is out of his/her interest, and Pb has no answer set satisfying G^3_s. The buyer then makes a concession by changing his/her original goal. The buyer relaxes G^1_b by goal replacement using the rule (9) in Pb, producing
pc(b1, 1G, 512M, 80G), drive, price(x), x ≤ 1200.
Using (10), the following proposal is produced:
pc(b1, 1G, 512M, 80G), cd-rw, price(x), x ≤ 1200.
As Pb \ {dvd-rw} has a consistent answer set satisfying the above proposal, the buyer proposes the conditional neighborhood proposal
G^4_b: pc(b1, 1G, 512M, 80G), cd-rw, not dvd-rw, price(x), x ≤ 1200
to the seller agent. Since Ps also has an answer set satisfying G^4_b, the seller accepts it and sends the message G^4_s = accept to the buyer. Thus, the negotiation ends in success.

4. COMPUTATION
In this section, we provide methods of computing proposals in terms of answer sets of programs. We first introduce some definitions from [15].

Definition 4.1. Given an abductive program ⟨P, H⟩, the set UR of update rules is defined as:
UR = { L ← not L̄, L̄ ← not L | L ∈ H }
   ∪ { +L ← L | L ∈ H \ P }
   ∪ { −L ← not L | L ∈ H ∩ P },
where L̄, +L, and −L are new atoms uniquely associated with every L ∈ H. The atoms +L and −L are called update atoms.

By the definition, the atom L̄ becomes true iff L is not true. The pair of rules L ← not L̄ and L̄ ← not L specifies the situation that an abducible L is either true or not. When p(x) ∈ H and p(a) ∈ P but p(t) ∉ P for t ≠ a, the rule +L ← L becomes, precisely, +p(t) ← p(t) for any t ≠ a. In this case, the rule is written, for short, as +p(x) ← p(x), x ≠ a. Generally, the rule becomes +p(x) ← p(x), x ≠ t1, ..., x ≠ tn for n such instances. The rule +L ← L derives the atom +L if an abducible L which is not in P is made true. In contrast, the rule −L ← not L derives the atom −L if an abducible L which is in P is made false. Thus, update atoms represent changes in the truth values of abducibles in a program: +L means the introduction of L, while −L means the deletion of L. When an abducible L contains variables, the associated update atom +L or −L is supposed to have exactly the same variables. In this case, an update atom is semantically identified with its ground instances. The set of all update atoms associated with the abducibles in H is denoted by UH, and UH = UH^+ ∪ UH^−, where UH^+ (resp. UH^−) is the set of update atoms of the form +L (resp. −L).

Definition 4.2. Given an abductive program ⟨P, H⟩, its update program UP is defined as the program
UP = (P \ H) ∪ UR.
An answer set S of UP is called U-minimal if there is no answer set T of UP such that T ∩ UH ⊂ S ∩ UH.
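Definition 4.1 translates almost line by line into code. The sketch below covers the ground case only; the name mangling used for the new atoms L̄, +L, and −L is our own convention, and the rule-triple encoding is the one used in the earlier sketches. It builds UR and the update program UP of Definition 4.2, illustrated on the ground fragment of Example 4.1 below.

    def bar(l):   return l + "_bar"   # the new atom L-bar
    def plus(l):  return "+" + l      # update atom +L
    def minus(l): return "-" + l      # update atom -L

    def update_program(prog, hyps):
        facts = {h for h, bp, bn in prog if not bp and not bn}
        ur = []
        for l in hyps:
            ur += [(l, [], [bar(l)]),            # L <- not L-bar
                   (bar(l), [], [l])]            # L-bar <- not L
            if l in facts:
                ur.append((minus(l), [], [l]))   # -L <- not L   (L in H and P)
            else:
                ur.append((plus(l), [l], []))    # +L <- L       (L in H \ P)
        # UP = (P \ H) + UR: abducible facts are removed from P
        p_minus_h = [r for r in prog if r[0] not in hyps or r[1] or r[2]]
        return p_minus_h + ur

    # ground fragment of Example 4.1 (t = tweety, o = opus):
    P = [("bird(t)", [], []), ("bird(o)", [], []),
         ("broken-wing(t)", [], []),
         ("ab(t)", ["broken-wing(t)"], []),
         ("flies(t)", ["bird(t)"], ["ab(t)"])]
    for rule in update_program(P, ["broken-wing(t)", "broken-wing(o)"]):
        print(rule)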
By the definition, U-minimal answer sets exist whenever UP has answer sets. Update programs are used for computing (minimal) explanations of an observation. Given an observation G as a conjunction of literals and NAF-literals possibly containing variables, we introduce a new ground literal O together with the rule O ← G. Then O has an explanation (E, F) iff G has the same explanation. With this replacement, an observation can be assumed to be a ground literal without loss of generality. In what follows, E^+ = {+L | L ∈ E} and F^− = {−L | L ∈ F} for E ⊆ H and F ⊆ H.

Proposition 4.1. ([15]) Let ⟨P, H⟩ be an abductive program, UP its update program, and G a ground literal representing an observation. Then, a pair (E, F) is an explanation of G iff UP ∪ {← not G} has a consistent answer set S such that E^+ = S ∩ UH^+ and F^− = S ∩ UH^−. In particular, (E, F) is a minimal explanation iff S is a U-minimal answer set.

Example 4.1. To explain the observation G = flies(t) in the program P of Example 2.1, first construct the update program UP of P:3
UP: flies(x) ← bird(x), not ab(x),
    ab(x) ← broken-wing(x),
    bird(t) ←,
    bird(o) ←,
    broken-wing(x) ← not broken-winḡ(x),
    broken-winḡ(x) ← not broken-wing(x),
    +broken-wing(x) ← broken-wing(x), x ≠ t,
    −broken-wing(t) ← not broken-wing(t).
Next, consider the program UP ∪ {← not flies(t)}. It has the single U-minimal answer set
S = { bird(t), bird(o), flies(t), flies(o), broken-winḡ(t), broken-winḡ(o), −broken-wing(t) }.
The unique minimal explanation (E, F) = (∅, {broken-wing(t)}) of G is expressed by the update atom −broken-wing(t) in S ∩ UH^−.

3 t represents tweety and o represents opus.

Proposition 4.2. Let ⟨P, H⟩ be an abductive program and G a ground literal representing an observation. If P ∪ {← not G} has a consistent answer set S, G has the minimal explanation (E, F) = (∅, ∅), and S satisfies G.

Now we provide methods for computing (counter-)proposals. First, conditional proposals are computed as follows.
input: an abductive program ⟨P, H⟩, a proposal G;
output: a set Sc of proposals.
If G is a ground literal, compute a minimal explanation (E, F) of G in ⟨P, H⟩ using the update program, and put "G, E, not F" in Sc. Else if G is a conjunction possibly containing variables, consider the abductive program ⟨P ∪ {O ← G}, H⟩ with a new ground literal O. Compute a minimal explanation of O in ⟨P ∪ {O ← G}, H⟩ using its update program. If O has a minimal explanation (E, F) with a substitution θ for the variables in G, put "Gθ, E, not F" in Sc.

Next, neighborhood proposals are computed as follows.
input: an abductive program ⟨P, H⟩, a proposal G;
output: a set Sn of proposals.
% neighborhood proposals by anti-instantiation:
Construct G′ by anti-instantiation. For a new ground literal O, if P ∪ {O ← G′} ∪ {← not O} has a consistent answer set satisfying G′θ with a substitution θ and G′θ ≠ G, put G′θ in Sn.
% neighborhood proposals by dropping conditions:
Construct G′ by dropping conditions. If G′ is a ground literal and the program P ∪ {← not G′} has a consistent answer set, put G′ in Sn. Else if G′ is a conjunction possibly containing variables, do the following: for a new ground literal O, if P ∪ {O ← G′} ∪ {← not O} has a consistent answer set satisfying G′θ with a substitution θ, put G′θ in Sn.
% neighborhood proposals by goal replacement:
Construct G′ by goal replacement. If G′ is a ground literal and there is a rule H ← B in P such that G′ = Hσ and Bσ ≠ G for some substitution σ, put G″ = Bσ. If P ∪ {← not G′} has a consistent answer set satisfying G″θ with a substitution θ, put G″θ in Sn. Else if G′ is a conjunction possibly containing variables, do the following: for a replaced literal L ∈ G′, if there is a rule H ← B in P such that L = Hσ and (G′ \ {L}) ∪ Bσ ≠ G for some substitution σ, put G″ = (G′ \ {L}) ∪ Bσ. For a new ground literal O, if P ∪ {O ← G″} ∪ {← not O} has a consistent answer set satisfying G″θ with a substitution θ, put G″θ in Sn.
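Given a consistent answer set of UP ∪ {← not G}, the readout of Proposition 4.1 above is a simple filter on the update atoms. A small sketch, reusing the illustrative "+"/"-" naming convention from the previous sketch:

    def explanation_from(answer_set):
        e = {a[1:] for a in answer_set if a.startswith("+")}  # E+ = S ∩ UH+
        f = {a[1:] for a in answer_set if a.startswith("-")}  # F- = S ∩ UH-
        return e, f

    # the U-minimal answer set of Example 4.1, in the mangled naming:
    S = {"bird(t)", "bird(o)", "flies(t)", "flies(o)",
         "broken-wing(t)_bar", "broken-wing(o)_bar", "-broken-wing(t)"}
    print(explanation_from(S))   # -> (set(), {'broken-wing(t)'})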
Theorem 4.3. The set Sc (resp. Sn) computed above coincides with the set of conditional proposals (resp. neighborhood proposals).
Proof. The result for Sc follows from Definition 3.3 and Proposition 4.1. The result for Sn follows from Definition 3.5 and Proposition 4.2.

Conditional neighborhood proposals are computed by combining the above two procedures. These proposals are computed at each round. Note that the procedure for computing Sn contains some nondeterministic choices. For instance, there are generally several candidate literals to relax in a proposal, and there might be several rules in a program usable for goal replacement. In practice, an agent can prespecify the literals in a proposal that are subject to relaxation, or the rules in a program that may be used for goal replacement.

5. RELATED WORK
As there is a large literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation.

Sadri et al. [14] use abductive logic programming as a representation language for negotiating agents. Agents negotiate using common dialogue primitives, called dialogue moves. Each agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles. The behavior of agents is regulated by an observe-think-act cycle. Once a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure. Their approach and ours both employ abductive logic programming as a platform for agent reasoning, but the use of it is quite different. First, they use abducibles to specify dialogue primitives of the form tell(utterer, receiver, subject, identifier, time), while we use abducibles to specify arbitrary permissible hypotheses for constructing conditional proposals. Second, their programs pre-specify a plan to carry out in order to achieve a goal, together with available/missing resources, in the context of resource-exchanging problems. This is in contrast with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent. Third, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents. They provide an operational model that completely specifies the behavior of agents in terms of an agent cycle. We do not provide such a complete specification of the behavior of agents; our primary interest is to mechanize the construction of proposals.

Bracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs. To explain an observation, two agents communicate by exchanging integrity constraints. In the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent.
A set IC′ of integrity constraints relaxes a set IC (or IC tightens IC′) if any observation that can be proved with respect to IC can also be proved with respect to IC′. For instance, IC′: ← a, b, c relaxes IC: ← a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program. In contrast, we use relaxation for weakening proposals, and three different relaxation methods (anti-instantiation, dropping conditions, and goal replacement) are considered. Their goal is to explain an observation by revising the integrity constraints of an agent through communication, while we use integrity constraints in communication to explain critiques and to help other agents in making counter-proposals.

Meyer et al. [11] introduce a logical framework for negotiating agents. They introduce two different modes of negotiation: concession and adaptation. They provide rational postulates to characterize negotiated outcomes between two agents and describe methods for constructing outcomes. They provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols. Moreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework.

Foo et al. [5] model one-to-one negotiation as a one-time encounter between two extended logic programs. An agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets. Starting from the initial agreement set S ∩ T, for an answer set S of one agent and an answer set T of the other agent, each agent extends this set to reflect its own demand while keeping consistency with the demand of the other agent. Their algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set. The work is extended to repeated encounters in [3]. In their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals.

There are a number of proposals for negotiation based on argumentation. An advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1]. The existence of arguments is useful to convince other agents of the reasons why an agent offers (counter-)proposals or returns critiques. Parsons et al. [13] develop a logic of argumentation-based negotiation among BDI agents. In one-to-one negotiation, an agent A generates a proposal together with its arguments and passes it to another agent B. The proposal is evaluated by B, which attempts to build arguments against it. If it conflicts with B's interests, B informs A of its objection by sending back its attacking argument. In response to this, A tries to find an alternative way of achieving its original objective, or a way of persuading B to drop its objection. If either type of argument can be found, A will submit it to B. If B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success. Otherwise, the process is iterated. In this negotiation process, the agent A never changes its original objective, so negotiation ends in failure if A fails to find an alternative way of achieving the original objective. In our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation.
Our framework does not have a mechanism of argumentation, but reasons for critiques can be communicated by responding with critique sets.

Kakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework. A proposal contains an offer corresponding to the negotiation object, together with supporting information representing conditions under which this offer is made. Supporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation. In their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable. Our present approach differs from theirs in the following points. First, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions under which proposals are accepted. Second, in their negotiation protocol, counter-proposals are chosen among candidates based on the preference knowledge of an agent at the meta-level, which represents the policy under which an agent uses its object-level decision rules according to situations. In our framework, counter-proposals are newly constructed using abduction and relaxation, and the method of construction is independent of particular negotiation protocols. As in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction. In contrast, our approach is based on extended abduction, which can not only introduce hypotheses but also remove them from a program. This is another important difference.

Relaxation and neighborhood query answering were devised to make databases cooperative with their users [4, 6]. In this sense, those techniques have a spirit similar to cooperative problem solving in multi-agent systems. As far as the authors know, however, there is no previous study which applies those techniques to agent negotiation.

6. CONCLUSION
In this paper we proposed a logical framework for negotiating agents. To construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation. It was shown that these two operations serve as general inference rules for producing proposals. We developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming. This enables us to realize automated negotiation on top of existing answer set solvers. The present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives. To compare and evaluate proposals, an agent must have preference knowledge over candidate proposals. Further elaboration to maximize the utility of agents is left for future study.

7. REFERENCES
[1] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. In: Proc. ECAI-00, pp. 338-342, IOS Press, 2000.
[2] A. Bracciali and P. Torroni. A new framework for knowledge revision of abductive agents through their interaction. In: Proc. CLIMA-IV, Computational Logic in Multi-Agent Systems, LNAI 3259, pp. 159-177, Springer, 2004.
[3] W. Chen, M. Zhang, and N. Foo. Repeated negotiation of logic programs. In: Proc. 7th Workshop on Nonmonotonic Reasoning, Action and Change, 2006.
[4] W. W. Chu, Q. Chen, and R.-C. Lee. Cooperative query answering via type abstraction hierarchy. In: Cooperating Knowledge Based Systems, S. M. Deen (ed.), pp. 271-290, Springer, 1990.
[5] N. Foo, T. Meyer, Y. Zhang, and D. Zhang. Negotiating logic programs. In: Proc. 6th Workshop on Nonmonotonic Reasoning, Action and Change, 2005.
[6] T. Gaasterland, P. Godfrey, and J. Minker. Relaxation as a platform for cooperative answering. Journal of Intelligent Information Systems 1(3/4):293-321, 1992.
[7] M. Gelfond and V. Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing 9:365-385, 1991.
[8] K. Inoue and C. Sakama. Abductive framework for nonmonotonic theory change. In: Proc. IJCAI-95, pp. 204-210, Morgan Kaufmann, 1995.
[9] A. C. Kakas, R. A. Kowalski, and F. Toni. The role of abduction in logic programming. In: Handbook of Logic in AI and Logic Programming, D. M. Gabbay et al. (eds), vol. 5, pp. 235-324, Oxford University Press, 1998.
[10] A. C. Kakas and P. Moraitis. Adaptive agent negotiation via argumentation. In: Proc. AAMAS-06, pp. 384-391, ACM Press, 2006.
[11] T. Meyer, N. Foo, R. Kwok, and D. Zhang. Logical foundation of negotiation: outcome, concession and adaptation. In: Proc. AAAI-04, pp. 293-298, MIT Press, 2004.
[12] R. S. Michalski. A theory and methodology of inductive learning. In: Machine Learning: An Artificial Intelligence Approach, R. S. Michalski et al. (eds), pp. 83-134, Morgan Kaufmann, 1983.
[13] S. Parsons, C. Sierra, and N. Jennings. Agents that reason and negotiate by arguing. Journal of Logic and Computation 8(3):261-292, 1998.
[14] F. Sadri, F. Toni, and P. Torroni. An abductive logic programming architecture for negotiating agents. In: Proc. 8th European Conf. on Logics in AI, LNAI 2424, pp. 419-431, Springer, 2002.
[15] C. Sakama and K. Inoue. An abductive framework for computing knowledge base updates. Theory and Practice of Logic Programming 3(6):671-715, 2003.
Negotiation by Abduction and Relaxation ABSTRACT This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals. 1. INTRODUCTION Automated negotiation has been received increasing attention in multi-agent systems, and a number of frameworks have been proposed in different contexts ([1, 2, 3, 5, 10, 11, 13, 14], for instance). Negotiation usually proceeds in a series of rounds and each agent makes a proposal at every round. An agent that received a proposal responds in two ways. One is a critique which is a remark as to whether or not (parts of) the proposal is accepted. The other is a counter-proposal which is an alternative proposal made in response to a previous proposal [13]. To see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. (Bi (or Si) represents an utterance of B (or S) in the i-th round.) B1: I want to buy a personal computer of the brand b1, with the specification of CPU:1 GHz, Memory:512 MB, HDD: 80GB, and a DVD-RW driver. I want to get it at the price under 1200 USD. S1: We can provide a PC with the requested specification if you pay for it by cash. In this case, however, service points are not added for this special discount. B2: I cannot pay it by cash. S2: In a normal price, the requested PC costs 1300 USD. B3: I cannot accept the price. My budget is under 1200 USD. S3: We can provide another computer with the requested specification, except that it is made by the brand b2. The price is exactly 1200 USD. B4: I do not want a PC of the brand b2. Instead, I can downgrade a driver from DVD-RW to CD-RW in my initial proposal. S4: Ok, I accept your offer. In this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned. In the rest of the dialogue, B2, B3, S4 are critiques, while S2, S3, B4 are counterproposals. Critiques are produced by evaluating a proposal in a knowledge base of an agent. In contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one. It is known that there are two ways of producing counterproposals: extending the initial proposal or amending part of the initial proposal. According to [13], the first type appears in the dialogue: A: "I propose that you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". The second type is in the dialogue: A: "I propose that I provide you with service Y if you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". A negotiation proceeds by iterating such "give-andtake" dialogues until it reaches an agreement/disagreement. In those dialogues, agents generate (counter -) proposals by reasoning on their own goals or objectives. The objective of the agent A in the above dialogues is to obtain service X. 
The agent B proposes conditions to provide the service. In the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise. In the dialogue of a buyer agent and a seller agent presented above, a buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW. Such behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems. Currently, there is no computational logic for automated negotiation which has general inference rules for producing (counter -) proposals. The purpose of this paper is to mechanize a process of building (counter -) proposals in one-to-one negotiation dialogues. We suppose an agent who has a knowledge base represented by a logic program. We then introduce methods for generating three different types of proposals. First, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one. Second, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one. Third, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal. We develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques. We also provide procedures for computing proposals in logic programming. This paper is organized as follows. Section 2 introduces a logical framework used in this paper. Section 3 presents methods for constructing proposals, and provides a negotiation protocol. Section 4 provides methods for computing proposals in logic programming. Section 5 discusses related works, and Section 6 concludes the paper. 2. PRELIMINARIES Logic programs considered in this paper are extended disjunctive programs (EDP) [7]. An EDP (or simply a program) is a set of rules of the form: (n ≥ m ≥ l ≥ 0) where each Li is a positive/negative literal, i.e., A or ¬ A for an atom A, and not is negation as failure (NAF). not L is called an NAF-literal. The symbol ";" represents disjunction. The left-hand side of the rule is the head, and the right-hand side is the body. For each rule r of the above form, head (r), body + (r) and body--(r) denote the sets of literals {L1,..., Ll}, {Ll +1,..., Lm}, and {Lm +1,..., Ln}, respectively. Also, not body--(r) denotes the set of NAF-literals {not Lm +1,..., not Ln}. A disjunction of literals and a conjunction of (NAF -) literals in a rule are identified with its corresponding sets of literals. A rule r is often written as head (r) ← body + (r), not body--(r) or head (r) ← body (r) where body (r) = body + (r) ∪ not body--(r). A rule r is disjunctive if head (r) contains more than one literal. A rule r is an integrity constraint if head (r) = ∅; and r is a fact if body (r) = ∅. A program is NAF-free if no rule contains NAF-literals. Two rules/literals are identified with respect to variable renaming. A substitution is a mapping from variables to terms 0 = {x1/t1,..., xn/tn}, where x1,..., xn are distinct variables and each ti is a term distinct from xi. Given a conjunction G of (NAF -) literals, G0 denotes the conjunction obtained by applying 0 to G. A program, rule, or literal is ground if it contains no variable. 
A program P with variables is a shorthand of its ground instantiation Ground (P), the set of ground rules obtained from P by substituting variables in P by elements of its Herbrand universe in every possible way. The semantics of an EDP is defined by the answer set semantics [7]. Let Lit be the set of all ground literals in the language of a program. Suppose a program P and a set of literals S (⊆ Lit). Then, the reduct P S is the program which contains the ground rule head (r) ← body + (r) iff there is a rule r in Ground (P) such that body--(r) ∩ S = ∅. Given an NAF-free EDP P, Cn (P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground (P), body (r) ⊆ Cn (P) implies head (r) ∩ Cn (P) = ~ ∅; and (ii) logically closed, i.e., it is either consistent or equal to Lit. Given an EDP P and a set S of literals, S is an answer set of P if S = Cn (P S). A program has none, one, or multiple answer sets in general. An answer set is consistent if it is not Lit. A program P is consistent if it has a consistent answer set; otherwise, P is inconsistent. Abductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming. An abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15]. An abductive program is a pair P, H where P is an EDP and H is a set of literals called abducibles. When a literal L ∈ H contains variables, any instance of L is also an abducible. An abductive program P, H is consistent if P is consistent. Throughout the paper, abductive programs are assumed to be consistent unless stated otherwise. Let G = L1,..., Lm, not Lm +1,..., not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm +1,..., Ln appears in L1,..., Lm. A set S of ground literals satisfies the conjunction G if {L10,..., Lm0} ⊆ S and {Lm +10,..., Ln0} ∩ S = ∅ for some ground instance G0 with a substitution 0. Let P, H be an abductive program and G a conjunction as above. A pair (E, F) is an explanation of an observation 1. (P \ F) ∪ E has an answer set which satisfies G, 2. (P \ F) ∪ E is consistent, 3. E and F are sets of ground literals such that E ⊆ H \ P and F ⊆ H ∩ P. When (P \ F) ∪ E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive pro gram P, H satisfying G (with respect to (E, F)). Note that if P has a consistent answer set S satisfying G, S is also a belief set of P, H satisfying G with respect to (E, F) = (∅, ∅). Extended abduction introduces/removes hypotheses to/from a program to explain an observation. Note that "normal" abduction (as in [9]) considers only introducing hypotheses to explain an observation. An explanation (E, F) of an observation G is called minimal if for any explanation (E', F') of G, E' ⊆ E and F' ⊆ F imply E' = E and F' = F. 3. NEGOTIATION 3.1 Conditional Proposals by Abduction We suppose an agent who has a knowledge base represented by an abductive program (P, H). A program P consists of two types of knowledge, belief B and desire D, where B represents objective knowledge of an agent, while D represents subjective knowledge in general. We define P = B U D, but do not distinguish B and D if such distinction is not important in the context. In contrast, abducibles H are used for representing permissible conditions to make a compromise in the process of negotiation. 
where every variable in G is existentially quantified at the front and range-restricted. In particular, G is called a critique if G = accept or G = reject where accept and reject are the reserved propositions. A counter-proposal is a proposal made in response to a proposal. DEFINITION 3.2. A proposal G is accepted in an abductive program (P, H) if P has an answer set satisfying G. When a proposal is not accepted, abduction is used for seeking conditions to make it acceptable. DEFINITION 3.3. Let (P, H) be an abductive program and G a proposal. If (E, F) is a minimal explanation of Gθ for some substitution θ in (P, H), the conjunction G': PROOF. When G' = Gθ, E, not F, (P \ F) U E has a consistent answer set S satisfying Gθ and E n F = 0. In this case, S satisfies Gθ, E, not F. A conditional proposal G' provides a minimal requirement for accepting the proposal G. If Gθ has multiple minimal explanations, several conditional proposals exist accordingly. When (E, F) = ~ (0, 0), a conditional proposal is used as a new proposal made in response to the proposal G. EXAMPLE 3.1. An agent seeks a position of a research assistant at the computer department of a university with the condition that the salary is at least 50,000 USD per year. The agent makes his/her request as the proposal:2 2For notational convenience, we often include mathematical (in) equations in proposals/programs. They are written by literals, for instance, x> y by geq (x, y) with a suitable definition of the predicate geq. where available positions are represented by disjunction. According to P, the base salary of a research assistant at the computer department is 40,000 USD, but if he/she has PhD, it is 60,000 USD. In this case, (E, F) = ({has PhD1, 0) becomes the minimal explanation of Gθ = assist (compt dept), salary (60, 000) with θ = {x/60, 000 1. Then, the conditional proposal made by the university becomes assist (compt dept), salary (60, 000), has PhD. 3.2 Neighborhood Proposals by Relaxation When a proposal is unacceptable, an agent tries to construct a new counter-proposal by weakening constraints in the initial proposal. We use techniques of relaxation for this purpose. Relaxation is used as a technique of cooperative query answering in databases [4, 6]. When an original query fails in a database, relaxation expands the scope of the query by relaxing the constraints in the query. This allows the database to return "neighborhood" answers which are related to the original query. We use the technique for producing proposals in the process of negotiation. DEFINITION 3.4. Let (P, H) be an abductive program and G a proposal. Then, G is relaxed to G' in the following three ways: Anti-instantiation: Construct G' such that G' θ = G for some substitution θ. Dropping conditions: Construct G' such that G' C G. Goal replacement: If G is a conjunction "G1, G2", where G1 and G2 are conjunctions, and there is a rule L +--G' 1 in P such that G' 1θ = G1 for some substitution θ, then build G' as Lθ, G2. Here, Lθ is called a replaced literal. In each case, every variable in G' is existentially quantified at the front and range-restricted. Anti-instantiation replaces constants (or terms) with fresh variables. Dropping conditions eliminates some conditions in a proposal. Goal replacement replaces the condition G1 in G with a literal Lθ in the presence of a rule L +--G' 1 in P under the condition G' 1θ = G1. All these operations generalize proposals in different ways. Each G' obtained by these operations is called a relaxation of G. 
It is worth noting that these operations are also used in the context of inductive generalization [12]. The relaxed proposal can produce new offers which are neighbor to the original proposal. DEFINITION 3.5. Let (P, H) be an abductive program and G a proposal. 1. Let G' be a proposal obtained by anti-instantiation. If P has an answer set S which satisfies G' θ for some substitution θ and G' θ = ~ G, G' θ is called a neighborhood proposal by anti-instantiation. 2. Let G' be a proposal obtained by dropping conditions. If P has an answer set S which satisfies G' θ for some substitution θ, G' θ is called a neighborhood proposal by dropping conditions. 1024 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 3. Let G ~ be a proposal obtained by goal replacement. For a replaced literal L E G ~ and a rule H <--B in P such that L = Hor and (G ~ \ {L}) U Bor = ~ G for some substitution or, put G ~ ~ = (G ~ \ {L}) U Bor. If P has an answer set S which satisfies G ~ ~ 0 for some substitution 0, G ~ ~ 0 is called a neighborhood proposal by goal replacement. EXAMPLE 3.2. (cont. Example 3.1) Given the proposal G = assist (compt dept), salary (x), x> 50, 000, • G ~ 1 = assist (w), salary (x), x> 50, 000 is produced by substituting compt dept with a variable w. As with 01 = {w/math dept} is satisfied by an answer set of P, G ~ 101 becomes a neighborhood proposal by anti-instantiation. • G ~ 2 = assist (compt dept), salary (x) is produced by dropping the salary condition x> 50, 000. As with 02 = {x/40, 000} is satisfied by an answer set of P, G ~ 202 becomes a neighborhood proposal by dropping conditions. • G ~ 3 = employee (compt dept), salary (x), x> 50, 000 is produced by replacing assist (compt dept) with employee (compt dept) using the rule employee (x) ← assist (x) in P. By G ~ 3 and the rule employee (x) ← system admin (x) in P, G ~ ~ 3 = sys admin (compt dept), salary (x), x> 50, 000 is produced. As with 03 = {x/55, 000} is satisfied by an answer set of P, G ~ ~ 3 03 becomes a neighborhood proposal by goal replacement. Finally, extended abduction and relaxation are combined to produce conditional neighborhood proposals. DEFINITION 3.6. Let (P, H) be an abductive program and G a proposal. 1. Let G ~ be a proposal obtained by either anti-instantiation or dropping conditions. If (E, F) is a minimal explanation of G ~ 0 (~ = G) for some substitution 0, the conjunction G ~ 0, E, not F is called a conditional neighborhood proposal by anti-instantiation/dropping conditions. 2. Let G ~ be a proposal obtained by goal replacement. Suppose G ~ ~ as in Definition 3.5 (3). If (E, F) is a minimal explanation of G ~ ~ 0 for some substitution 0, the conjunction G ~ ~ 0, E, not F is called a conditional neighborhood proposal by goal replacement. A conditional neighborhood proposal reduces to a neighborhood proposal when (E, F) = (0, 0). 3.3 Negotiation Protocol A negotiation protocol defines how to exchange proposals in the process of negotiation. This section presents a negotiation protocol in our framework. We suppose one-to-one negotiation between two agents who have a common ontology and the same language for successful communication. Integrity constraints are conditions which an agent should satisfy, so that they are used to explain why an agent does not accept a proposal. A negotiation proceeds in a series of rounds. Each i-th round (i> 1) consists of a proposal Gi1 made by one agent Ag1 and another proposal Gi2 made by the other agent Ag2. DEFINITION 3.8. 
DEFINITION 3.8. Let (P1, H1) be an abductive program of an agent Ag1 and G^i_2 a proposal made by Ag2 at the i-th round. A critique set of Ag1 (at the i-th round) is the set CS^i_1(P1, G^j_2) = CS^{i-1}_1(P1, G^{j-1}_2) ∪ { r | r is an integrity constraint in P1 and G^j_2 violates r }, where j = i−1 or i, and CS^0_1(P1, G^0_2) = CS^1_1(P1, G^0_2) = ∅. A critique set of an agent Ag1 accumulates integrity constraints which are violated by proposals made by another agent Ag2. CS^i_2(P2, G^j_1) is defined in the same manner. DEFINITION 3.9. Let (Pk, Hk) be an abductive program of an agent Agk and Gj a proposal, which is not a critique, made by any agent at the j-th round (j < i). A negotiation set of Agk (at the i-th round) is a triple NS^i_k = (S^i_c, S^i_n, S^i_cn), where S^i_c is the set of conditional proposals, S^i_n is the set of neighborhood proposals, and S^i_cn is the set of conditional neighborhood proposals, produced by Gj and (Pk, Hk). A negotiation set represents the space of possible proposals made by an agent. S^i_x (x ∈ {c, n, cn}) accumulates proposals produced by Gj (1 ≤ j ≤ i) according to Definitions 3.3, 3.5, and 3.6. Note that an agent can construct counter-proposals by modifying its own previous proposals or another agent's proposals. An agent Agk accumulates proposals that are made by Agk but are rejected by another agent, in the failed proposal set FP^i_k (at the i-th round), where FP^0_k = ∅. Suppose two agents Ag1 and Ag2 who have abductive programs (P1, H1) and (P2, H2), respectively. Given a proposal G^1_1 which is satisfied by an answer set of P1, a negotiation starts. In response to the proposal G^i_1 made by Ag1 at the i-th round, Ag2 behaves as follows. 1. If G^i_1 = accept, an agreement is reached and negotiation ends in success. 2. Else if G^i_1 = reject, put FP^i_2 = FP^{i-1}_2 ∪ {G^{i-1}_2} where {G^0_2} = ∅. Proceed to step 4(b). 3. Else if P2 has an answer set satisfying G^i_1, Ag2 returns G^i_2 = accept to Ag1. Negotiation ends in success. 4. Otherwise, Ag2 behaves as follows. Put FP^i_2 = FP^{i-1}_2. (a) If G^i_1 violates an integrity constraint in P2, return the critique G^i_2 = reject to Ag1, together with the critique set CS^i_2(P2, G^i_1). (b) Otherwise, construct NS^i_2 as follows. (i) Produce S^i_c. Let μ(S^i_x) = {p | p ∈ S^i_x \ FP^i_2 and p satisfies the constraints in CS^i_1(P1, G^{i-1}_2)}. If μ(S^i_c) ≠ ∅, select one from μ(S^i_c) and propose it as G^i_2 to Ag1; otherwise, go to (ii). (ii) Produce S^i_n. If μ(S^i_n) ≠ ∅, select one from μ(S^i_n) and propose it as G^i_2 to Ag1; otherwise, go to (iii). (iii) Produce S^i_cn. If μ(S^i_cn) ≠ ∅, select one from μ(S^i_cn) and propose it as G^i_2 to Ag1; otherwise, negotiation ends in failure. This means that Ag2 can make no counter-proposal or every counter-proposal made by Ag2 is rejected by Ag1. In step 4(a), Ag2 rejects the proposal G^i_1 and returns the reason of rejection as a critique set. This helps Ag1 in preparing a next counter-proposal. In step 4(b), Ag2 constructs a new proposal. In its construction, Ag2 should take care of the critique set CS^i_1(P1, G^{i-1}_2), which represents integrity constraints, if any, accumulated in previous rounds, that Ag1 must satisfy. Also, FP^i_2 is used for removing proposals which have already been rejected, as in the sketch below.
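The control flow of steps 1-4 can be summarized by the following Python sketch; accepts, violates, and ok abstract over the answer set and integrity constraint checks, and every name here is an assumption made for illustration only.

class State:
    def __init__(self):
        self.failed = set()           # FP^i_2: own proposals already rejected by Ag1
        self.peer_critiques = set()   # CS^i_1: constraints reported by Ag1
        self.S = {"c": set(), "n": set(), "cn": set()}  # negotiation set NS^i_2
        self.last_own = None          # the last proposal Ag2 made

def respond(G1, state, accepts, violates, ok):
    # One response of Ag2 to Ag1's proposal G1 at the current round.
    if G1 == "accept":
        return "success"                           # step 1
    if G1 == "reject":
        if state.last_own is not None:
            state.failed.add(state.last_own)       # step 2: update FP, go to 4(b)
    elif accepts(G1):
        return "accept"                            # step 3
    else:
        violated = violates(G1)                    # step 4(a): critique with reason
        if violated:
            return ("reject", violated)
    for x in ("c", "n", "cn"):                     # step 4(b): S_c, then S_n, then S_cn
        candidates = [p for p in state.S[x] - state.failed
                      if ok(p, state.peer_critiques)]   # the filter mu(S^i_x)
        if candidates:
            state.last_own = candidates[0]
            return candidates[0]
    return "failure"

# Toy run: one stored neighborhood proposal and no constraints to respect.
st = State()
st.S["n"] = {"relaxed_pc_offer"}
print(respond("unacceptable_offer", st, accepts=lambda g: False,
              violates=lambda g: set(), ok=lambda p, cs: True))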
Construction of S^i_x (x ∈ {c, n, cn}) in NS^i_2 is done incrementally by adding new counter-proposals produced by G^i_1 or G^{i-1}_2 to S^{i-1}_x. For instance, S^i_n in NS^i_2 is computed as S^i_n = S^{i-1}_n ∪ { p | p is a neighborhood proposal newly produced from G^i_1 or G^{i-1}_2 }, where S^0_n = ∅. That is, S^i_n is constructed from S^{i-1}_n by adding new proposals which are obtained by modifying the proposal G^i_1 made by Ag1 at the i-th round or modifying the proposal G^{i-1}_2 made by Ag2 at the (i−1)-th round. S^i_c and S^i_cn are obtained in the same manner. In the above protocol, an agent produces S^i_c first, S^i_n second, and S^i_cn last. This strategy seeks conditions which satisfy the given proposal, prior to neighborhood proposals which change the original one. Another strategy, which prefers neighborhood proposals to conditional ones, is also conceivable. Conditional neighborhood proposals are considered in the last place, since they differ from the original one to the maximal extent. The above protocol produces the candidate proposals in S^i_x for each x ∈ {c, n, cn} at once. We can consider a variant of the protocol in which each proposal in S^i_x is constructed one by one (see Example 3.3). The above protocol is repeatedly applied to each of the two negotiating agents until a negotiation ends in success/failure. Formally, the above negotiation protocol has the following properties. THEOREM 3.2. Let Ag1 and Ag2 be two agents having abductive programs (P1, H1) and (P2, H2), respectively. 1. If (P1, H1) and (P2, H2) are function-free (i.e., both Pi and Hi contain no function symbol), any negotiation will terminate. 2. If a negotiation terminates with agreement on a proposal G, both (P1, H1) and (P2, H2) have belief sets satisfying G. PROOF. 1. When an abductive program is function-free, abducibles and negotiation sets are both finite. Moreover, once a proposal is rejected, it is not proposed again owing to the function μ. Thus, negotiation terminates in finitely many steps. 2. When a proposal G is made by Ag1, (P1, H1) has a belief set satisfying G. If the agent Ag2 accepts the proposal G, it is satisfied by an answer set of P2, which is also a belief set of (P2, H2). EXAMPLE 3.3. A seller agent has an abductive program (Ps, Hs) in which Ps consists of belief Bs and desire Ds (its rules (1)-(8) are not reproduced here). Here, (1) and (2) represent selection of products. The atom pc(b1, 1G, 512M, 80G) represents that the seller agent has a PC of the brand b1 such that CPU is 1GHz, memory is 512MB, and HDD is 80GB. Prices of products are represented as desire of the seller. The rules (3)-(5) are normal prices of products. A normal price is a selling price on the condition that service points are added (6). On the other hand, a discount price is applied if the paying method is cash and no service point is added (7). The fact (8) represents the addition of service points. This service would be withdrawn in case of discount prices, so add_point is specified as an abducible. A buyer agent has the abductive program (Pb, Hb) in which Pb consists of belief Bb and desire Db (rules (9)-(16), also not reproduced); (15) and (16) impose constraints for buying a PC. A DVD-RW is specified as an abducible which is subject to concession. (1st round) First, the following proposal is given by the buyer agent: G^1_b: pc(b1, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200. As Ps has no answer set which satisfies G^1_b, the seller agent cannot accept the proposal. The seller takes the action of making a counter-proposal and performs abduction. As a result, the seller finds the minimal explanation (E, F) = ({pay_cash}, {add_point}) which explains G^1_b θ1 with θ1 = {x/1170}. The seller constructs the conditional proposal G^1_s: pc(b1, 1G, 512M, 80G), dvd-rw, price(1170), pay_cash, not add_point and offers it to the buyer. (2nd round) The buyer does not accept G^1_s because he/she cannot pay it by cash (15). The buyer then returns the critique G^2_b = reject to the seller, together with the critique set CS^2_b(Pb, G^1_s) = {(15)}.
In response to this, the seller tries to make another proposal which satisfies the constraint in this critique set. As G^1_s is stored in FP^2_s and no other conditional proposal satisfying the buyer's requirement exists, the seller produces neighborhood proposals. He/she relaxes G^1_b by dropping x ≤ 1200 in the condition, and produces pc(b1, 1G, 512M, 80G), dvd-rw, price(x). As Ps has an answer set which satisfies G^2_s: pc(b1, 1G, 512M, 80G), dvd-rw, price(1300), the seller offers G^2_s as a new counter-proposal. (3rd round) The buyer does not accept G^2_s because he/she cannot pay more than 1200 USD (16). The buyer again returns the critique G^3_b = reject to the seller, together with the critique set CS^3_b(Pb, G^2_s) = CS^2_b(Pb, G^1_s) ∪ {(16)}. The seller then considers another proposal by replacing b1 with a variable w; G^1_b now becomes pc(w, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200. As Ps has an answer set which satisfies G^3_s: pc(b2, 1G, 512M, 80G), dvd-rw, price(1200), the seller offers G^3_s as a new counter-proposal. (4th round) The buyer does not accept G^3_s because a PC of the brand b2 is out of his/her interest and Pb has no answer set satisfying G^3_s. Then, the buyer makes a concession by changing his/her original goal. The buyer relaxes G^1_b by goal replacement using the rule (9) in Pb, and produces pc(b1, 1G, 512M, 80G), drive, price(x), x ≤ 1200. Using (10), the following proposal is produced: pc(b1, 1G, 512M, 80G), cd-rw, price(x), x ≤ 1200. As Pb \ {dvd-rw} has a consistent answer set satisfying the above proposal, the buyer proposes it, extended with not dvd-rw, as the conditional neighborhood proposal G^4_b to the seller agent. Since Ps also has an answer set satisfying G^4_b, the seller accepts it and sends the message G^4_s = accept to the buyer. Thus, the negotiation ends in success. 4. COMPUTATION In this section, we provide methods of computing proposals in terms of answer sets of programs. We first introduce some definitions from [15]. DEFINITION 4.1. Given an abductive program (P, H), the set UR of update rules is defined as UR = { L̄ ← not L, L ← not L̄ | L ∈ H } ∪ { +L ← L | L ∈ H \ P } ∪ { −L ← not L | L ∈ H ∩ P }, where L̄, +L, and −L are new atoms uniquely associated with every L ∈ H. The atoms +L and −L are called update atoms. By the definition, the atom L̄ becomes true iff L is not true. The pair of rules L̄ ← not L and L ← not L̄ specify the situation that an abducible L is true or not. When p(x) ∈ H and p(a) ∈ P but p(t) ∉ P for t ≠ a, the rule +L ← L precisely becomes +p(t) ← p(t) for any t ≠ a. In this case, the rule is shortly written as +p(x) ← p(x), x ≠ a. Generally, the rule becomes +p(x) ← p(x), x ≠ t1, ..., x ≠ tn for n such instances. The rule +L ← L derives the atom +L if an abducible L which is not in P is to be true. In contrast, the rule −L ← not L derives the atom −L if an abducible L which is in P is not to be true. Thus, update atoms represent the change of truth values of abducibles in a program. That is, +L means the introduction of L, while −L means the deletion of L. When an abducible L contains variables, the associated update atom +L or −L is supposed to have exactly the same variables. In this case, an update atom is semantically identified with its ground instances. The set of all update atoms associated with the abducibles in H is denoted by UH, and UH = UH+ ∪ UH−, where UH+ (resp. UH−) is the set of update atoms of the form +L (resp. −L). DEFINITION 4.2. Given an abductive program (P, H), its update program UP is defined as the program UP = (P \ H) ∪ UR. An answer set S of UP is called U-minimal if there is no answer set T of UP such that T ∩ UH ⊂ S ∩ UH.
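Under the reading of Definition 4.1 given above, constructing UR for ground abducibles is mechanical; the Python sketch below is our illustration, with the bar_, +, and - prefixes standing in for the new atoms L̄, +L, and −L.

def update_rules(H, P_facts):
    # UR: for each abducible L, the pair bar_L <- not L and L <- not bar_L,
    # plus +L <- L when L is not a fact of P, and -L <- not L when it is.
    UR = []
    for L in sorted(H):
        UR.append(("bar_" + L, ["not " + L]))
        UR.append((L, ["not bar_" + L]))
        if L in P_facts:
            UR.append(("-" + L, ["not " + L]))   # deletion atom -L
        else:
            UR.append(("+" + L, [L]))            # introduction atom +L
    return UR

# Abducibles p(a) and p(b), with only p(a) appearing as a fact of P.
H = {"p(a)", "p(b)"}
P_facts = {"p(a)"}
for head, body in update_rules(H, P_facts):
    print(head, "<-", ", ".join(body))
# The update program UP is then (P \ H) joined with these rules.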
By the definition, U-minimal answer sets exist whenever UP has answer sets. Update programs are used for computing (minimal) explanations of an observation. Given an observation G as a conjunction of literals and NAF-literals possibly containing variables, we introduce a new ground literal O together with the rule O ← G. In this case, O has an explanation (E, F) iff G has the same explanation. With this replacement, an observation is assumed to be a ground literal without loss of generality. In what follows, E+ = {+L | L ∈ E} and F− = {−L | L ∈ F} for E ⊆ H and F ⊆ H. For example, consider the abductive program (P, H) with H = {broken-wing(x)} and P consisting of the rule flies(x) ← bird(x), not broken-wing(x) together with the facts bird(t), bird(o), and broken-wing(t), and take the observation G = flies(t). The program UP ∪ {← not flies(t)} has the single U-minimal answer set S = {bird(t), bird(o), flies(t), flies(o), broken-winḡ(t), broken-winḡ(o), −broken-wing(t)}. The unique minimal explanation (E, F) = (∅, {broken-wing(t)}) of G is expressed by the update atom −broken-wing(t) in S ∩ UH−. Conditional proposals are computed as follows. input: an abductive program (P, H), a proposal G; output: a set S_c of proposals. Construct (P ∪ {O ← G}, H) with a ground literal O. Compute a minimal explanation of O in (P ∪ {O ← G}, H) using its update program. If O has a minimal explanation (E, F) with a substitution θ for variables in G, put "Gθ, E, not F" in S_c. Next, neighborhood proposals are computed as follows. input: an abductive program (P, H), a proposal G; output: a set S_n of proposals. % neighborhood proposals by anti-instantiation; Construct G′ by anti-instantiation. For a ground literal O, if P ∪ {O ← G′} ∪ {← not O} has a consistent answer set satisfying G′θ with a substitution θ and G′θ ≠ G, put G′θ in S_n. % neighborhood proposals by dropping conditions; Construct G′ by dropping conditions. If G′ is a ground literal and the program P ∪ {← not G′} has a consistent answer set, put G′ in S_n. Else if G′ is a conjunction possibly containing variables, do the following. For a ground literal O, if P ∪ {O ← G′} ∪ {← not O} has a consistent answer set satisfying G′θ with a substitution θ, put G′θ in S_n. % neighborhood proposals by goal replacement; Construct G′ by goal replacement. If G′ is a ground literal and there is a rule H ← B in P such that G′ = Hσ and Bσ ≠ G for some substitution σ, put G″ = Bσ. If P ∪ {← not G′} has a consistent answer set satisfying G″θ with a substitution θ, put G″θ in S_n. Else if G′ is a conjunction possibly containing variables, do the following. For a replaced literal L ∈ G′, if there is a rule H ← B in P such that L = Hσ and (G′ \ {L}) ∪ Bσ ≠ G for some substitution σ, put G″ = (G′ \ {L}) ∪ Bσ. For a ground literal O, if P ∪ {O ← G″} ∪ {← not O} has a consistent answer set satisfying G″θ with a substitution θ, put G″θ in S_n. THEOREM 4.3. The set S_c (resp. S_n) computed above coincides with the set of conditional proposals (resp. neighborhood proposals). PROOF. The result for S_c follows from Definition 3.3 and Proposition 4.1. The result for S_n follows from Definition 3.5 and Proposition 4.2. Conditional neighborhood proposals are computed by combining the above two procedures. Those proposals are computed at each round. Note that the procedure for computing S_n contains some nondeterministic choices. For instance, there are generally several candidate literals to relax in a proposal. Also, there might be several rules in a program usable for goal replacement. In practice, an agent can prespecify literals in a proposal for possible relaxation, or rules in a program for the usage of goal replacement.
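Reading an explanation off the update atoms of an answer set of UP, as in the flies example above, can be sketched in Python as follows (our illustration; atom encoding as in the previous sketch, with bar_ standing for L̄):

def u_minimal(answer_sets, UH):
    # Keep the answer sets whose update-atom part is minimal under set inclusion.
    parts = [(S, S & UH) for S in answer_sets]
    return [S for S, u in parts if not any(v < u for _, v in parts)]

def explanation(S):
    # E = {L | +L in S} and F = {L | -L in S}, cf. E+ and F- above.
    E = {a[1:] for a in S if a.startswith("+")}
    F = {a[1:] for a in S if a.startswith("-")}
    return E, F

S = {"bird(t)", "bird(o)", "flies(t)", "flies(o)",
     "bar_broken-wing(t)", "bar_broken-wing(o)", "-broken-wing(t)"}
print(explanation(S))   # (set(), {'broken-wing(t)'}), as in the example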
5. RELATED WORK As there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation. Sadri et al. [14] use abductive logic programming as a representation language of negotiating agents. Agents negotiate using common dialogue primitives, called dialogue moves. Each agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles. The behavior of agents is regulated by an observe-think-act cycle. Once a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure. Their approach and ours both employ abductive logic programming as a platform of agent reasoning, but the use of it is quite different. First, they use abducibles to specify dialogue primitives of the form tell(utterer, receiver, subject, identifier, time), while we use abducibles to specify arbitrary permissible hypotheses to construct conditional proposals. Second, their program pre-specifies a plan to carry out in order to achieve a goal, together with available/missing resources in the context of resource-exchanging problems. This is in contrast with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent. Third, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents. They provide an operational model that completely specifies the behavior of agents in terms of an agent cycle. We do not provide such a complete specification of the behavior of agents. Our primary interest is to mechanize construction of proposals. Bracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs. To explain an observation, two agents communicate by exchanging integrity constraints. In the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent. A set IC′ of integrity constraints relaxes a set IC (or IC tightens IC′) if any observation that can be proved with respect to IC can also be proved with respect to IC′. For instance, IC′: ← a, b, c relaxes IC: ← a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program. In contrast, we use relaxation for weakening proposals, and three different relaxation methods, anti-instantiation, dropping conditions, and goal replacement, are considered. Their goal is to explain an observation by revising integrity constraints of an agent through communication, while we use integrity constraints for communication to explain critiques and help other agents in making counter-proposals. Meyer et al. [11] introduce a logical framework for negotiating agents. They introduce two different modes of negotiation: concession and adaptation. They provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes. They provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols. Moreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework. Foo et al. [5] model one-to-one negotiation as a one-time encounter between two extended logic programs.
An agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets. Starting from the initial agreement set S ∩ T for an answer set S of an agent and an answer set T of another agent, each agent extends this set to reflect its own demand while keeping consistency with the demand of the other agent. Their algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set. The work is extended to repeated encounters in [3]. In their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals. There are a number of proposals for negotiation based on argumentation. An advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1]. The existence of arguments is useful to convince other agents of reasons why an agent offers (counter-)proposals or returns critiques. Parsons et al. [13] develop a logic of argumentation-based negotiation among BDI agents. In one-to-one negotiation, an agent A generates a proposal together with its arguments, and passes it to another agent B. The proposal is evaluated by B, which attempts to build arguments against it. If it conflicts with B's interest, B informs A of its objection by sending back its attacking argument. In response to this, A tries to find an alternative way of achieving its original objective, or a way of persuading B to drop its objection. If either type of argument can be found, A will submit it to B. If B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success. Otherwise, the process is iterated. In this negotiation process, the agent A never changes its original objective, so negotiation ends in failure if A fails to find an alternative way of achieving the original objective. In our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation. Our framework does not have a mechanism of argumentation, but reasons for critiques can be communicated by responding critique sets. Kakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework. A proposal contains an offer corresponding to the negotiation object, together with supporting information representing conditions under which this offer is made. Supporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation. In their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable. Our present approach differs from theirs in the following points. First, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to accept. Second, in their negotiation protocol, counter-proposals are chosen among candidates based on preference knowledge of an agent at the meta-level, which represents the policy under which an agent uses its object-level decision rules according to situations. In our framework, counter-proposals are newly constructed using abduction and relaxation. The method of construction is independent of particular negotiation protocols.
As in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction. In contrast, our approach is based on extended abduction, which can not only introduce hypotheses but also remove them from a program. This is another important difference. Relaxation and neighborhood query answering are devised to make databases cooperative with their users [4, 6]. In this sense, those techniques have a spirit similar to cooperative problem solving in multi-agent systems. As far as the authors know, however, there is no study which applies those techniques to agent negotiation. 6. CONCLUSION In this paper we proposed a logical framework for negotiating agents. To construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation. It was shown that these two operations serve as general inference rules in producing proposals. We developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming. This enables us to realize automated negotiation on top of existing answer set solvers. The present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives. To compare and evaluate proposals, an agent must have preference knowledge of candidate proposals. Further elaboration to maximize the utility of agents is left for future study.
Negotiation by Abduction and Relaxation ABSTRACT This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals. 1. INTRODUCTION Automated negotiation has received increasing attention in multi-agent systems, and a number of frameworks have been proposed in different contexts ([1, 2, 3, 5, 10, 11, 13, 14], for instance). Negotiation usually proceeds in a series of rounds and each agent makes a proposal at every round. An agent that received a proposal responds in two ways. One is a critique, which is a remark as to whether or not (parts of) the proposal is accepted. The other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13]. To see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. (Bi (or Si) represents an utterance of B (or S) in the i-th round.) B1: I want to buy a personal computer of the brand b1, with the specification of CPU: 1GHz, Memory: 512MB, HDD: 80GB, and a DVD-RW driver. I want to get it at the price under 1200 USD. S1: We can provide a PC with the requested specification if you pay for it by cash. In this case, however, service points are not added for this special discount. B2: I cannot pay it by cash. S2: At the normal price, the requested PC costs 1300 USD. B3: I cannot accept the price. My budget is under 1200 USD. S3: We can provide another computer with the requested specification, except that it is made by the brand b2. The price is exactly 1200 USD. B4: I do not want a PC of the brand b2. Instead, I can downgrade the driver from DVD-RW to CD-RW in my initial proposal. S4: Ok, I accept your offer. In this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned. In the rest of the dialogue, B2, B3, and S4 are critiques, while S2, S3, and B4 are counter-proposals. Critiques are produced by evaluating a proposal in a knowledge base of an agent. In contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one. It is known that there are two ways of producing counter-proposals: extending the initial proposal or amending part of the initial proposal. According to [13], the first type appears in the dialogue: A: "I propose that you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". The second type is in the dialogue: A: "I propose that I provide you with service Y if you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". A negotiation proceeds by iterating such "give-and-take" dialogues until it reaches an agreement/disagreement. In those dialogues, agents generate (counter-)proposals by reasoning on their own goals or objectives. The objective of the agent A in the above dialogues is to obtain service X.
The agent B proposes conditions to provide the service. In the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise. In the dialogue of a buyer agent and a seller agent presented above, the buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW. Such behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems. Currently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals. The purpose of this paper is to mechanize a process of building (counter-)proposals in one-to-one negotiation dialogues. We suppose an agent who has a knowledge base represented by a logic program. We then introduce methods for generating three different types of proposals. First, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one. Second, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one. Third, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal. We develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques. We also provide procedures for computing proposals in logic programming. This paper is organized as follows. Section 2 introduces a logical framework used in this paper. Section 3 presents methods for constructing proposals, and provides a negotiation protocol. Section 4 provides methods for computing proposals in logic programming. Section 5 discusses related works, and Section 6 concludes the paper. 2. PRELIMINARIES Logic programs considered in this paper are extended disjunctive programs (EDP) [7]. An EDP (or simply a program) is a set of rules of the form L1 ; ... ; Ll ← Ll+1, ..., Lm, not Lm+1, ..., not Ln (n ≥ m ≥ l ≥ 0), where each Li is a positive/negative literal, i.e., A or ¬A for an atom A, and not is negation as failure (NAF). not L is called an NAF-literal. The symbol ";" represents disjunction. The left-hand side of the rule is the head, and the right-hand side is the body. For each rule r of the above form, head(r), body+(r), and body−(r) denote the sets of literals {L1, ..., Ll}, {Ll+1, ..., Lm}, and {Lm+1, ..., Ln}, respectively. Also, not body−(r) denotes the set of NAF-literals {not Lm+1, ..., not Ln}. A disjunction of literals and a conjunction of (NAF-)literals in a rule are identified with their corresponding sets of literals. A rule r is often written as head(r) ← body+(r), not body−(r) or head(r) ← body(r) where body(r) = body+(r) ∪ not body−(r). A rule r is disjunctive if head(r) contains more than one literal. A rule r is an integrity constraint if head(r) = ∅; and r is a fact if body(r) = ∅. A program is NAF-free if no rule contains NAF-literals. Two rules/literals are identified with respect to variable renaming. A substitution is a mapping from variables to terms θ = {x1/t1, ..., xn/tn}, where x1, ..., xn are distinct variables and each ti is a term distinct from xi. Given a conjunction G of (NAF-)literals, Gθ denotes the conjunction obtained by applying θ to G. A program, rule, or literal is ground if it contains no variable.
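To make the reduct-based answer set semantics recalled next concrete, here is a toy Python check of the condition S = Cn(P^S) for ground rules, restricted to non-disjunctive programs for brevity; the rule encoding is an assumption of ours.

def reduct(P, S):
    # P^S: keep head <- body+ for each rule whose body- is disjoint from S.
    return [(h, pos) for (h, pos, neg) in P if not (set(neg) & S)]

def cn(Q):
    # Least set of literals closed under the NAF-free rules Q (non-disjunctive).
    T, changed = set(), True
    while changed:
        changed = False
        for h, pos in Q:
            if set(pos) <= T and h not in T:
                T.add(h)
                changed = True
    return T

def is_answer_set(P, S):
    return cn(reduct(P, S)) == S

# p <- not q and q <- not p: the answer sets are {p} and {q}, but not {p, q}.
P = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_answer_set(P, {"p"}), is_answer_set(P, {"q"}), is_answer_set(P, {"p", "q"}))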
A program P with variables is a shorthand for its ground instantiation Ground(P), the set of ground rules obtained from P by substituting variables in P by elements of its Herbrand universe in every possible way. The semantics of an EDP is defined by the answer set semantics [7]. Let Lit be the set of all ground literals in the language of a program. Suppose a program P and a set of literals S (⊆ Lit). Then, the reduct P^S is the program which contains the ground rule head(r) ← body+(r) iff there is a rule r in Ground(P) such that body−(r) ∩ S = ∅. Given an NAF-free EDP P, Cn(P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground(P), body(r) ⊆ Cn(P) implies head(r) ∩ Cn(P) ≠ ∅; and (ii) logically closed, i.e., it is either consistent or equal to Lit. Given an EDP P and a set S of literals, S is an answer set of P if S = Cn(P^S). A program has none, one, or multiple answer sets in general. An answer set is consistent if it is not Lit. A program P is consistent if it has a consistent answer set; otherwise, P is inconsistent. Abductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming. The abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15]. An abductive program is a pair (P, H) where P is an EDP and H is a set of literals called abducibles. When a literal L ∈ H contains variables, any instance of L is also an abducible. An abductive program (P, H) is consistent if P is consistent. Throughout the paper, abductive programs are assumed to be consistent unless stated otherwise. Let G = L1, ..., Lm, not Lm+1, ..., not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm+1, ..., Ln appears in L1, ..., Lm. A set S of ground literals satisfies the conjunction G if {L1θ, ..., Lmθ} ⊆ S and {Lm+1θ, ..., Lnθ} ∩ S = ∅ for some ground instance Gθ with a substitution θ. Let (P, H) be an abductive program and G a conjunction as above. A pair (E, F) is an explanation of an observation G in (P, H) if: 1. (P \ F) ∪ E has an answer set which satisfies G, 2. (P \ F) ∪ E is consistent, 3. E and F are sets of ground literals such that E ⊆ H \ P and F ⊆ H ∩ P. When (P \ F) ∪ E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program (P, H) satisfying G (with respect to (E, F)). Note that if P has a consistent answer set S satisfying G, S is also a belief set of (P, H) satisfying G with respect to (E, F) = (∅, ∅). Extended abduction introduces/removes hypotheses to/from a program to explain an observation. Note that "normal" abduction (as in [9]) considers only introducing hypotheses to explain an observation. An explanation (E, F) of an observation G is called minimal if for any explanation (E′, F′) of G, E′ ⊆ E and F′ ⊆ F imply E′ = E and F′ = F. 3. NEGOTIATION 3.1 Conditional Proposals by Abduction 3.2 Neighborhood Proposals by Relaxation 3.3 Negotiation Protocol 4. COMPUTATION 5.
RELATED WORK As there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation. Sadri et al. [14] use abductive logic programming as a representation language of negotiating agents. Agents negotiate using common dialogue primitives, called dialogue moves. Each agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles. The behavior of agents is regulated by an observe-think-act cycle. Once a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure. Their approach and ours both employ abductive logic programming as a platform of agent reasoning, but the use of it is quite different. First, they use abducibles to specify dialogue primitives of the form tell(utterer, receiver, subject, identifier, time), while we use abducibles to specify arbitrary permissible hypotheses to construct conditional proposals. Second, their program pre-specifies a plan to carry out in order to achieve a goal, together with available/missing resources in the context of resource-exchanging problems. This is in contrast with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent. Third, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents. They provide an operational model that completely specifies the behavior of agents in terms of an agent cycle. We do not provide such a complete specification of the behavior of agents. Our primary interest is to mechanize construction of proposals. Bracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs. To explain an observation, two agents communicate by exchanging integrity constraints. In the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent. A set IC′ of integrity constraints relaxes a set IC (or IC tightens IC′) if any observation that can be proved with respect to IC can also be proved with respect to IC′. For instance, IC′: ← a, b, c relaxes IC: ← a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program. In contrast, we use relaxation for weakening proposals, and three different relaxation methods, anti-instantiation, dropping conditions, and goal replacement, are considered. Their goal is to explain an observation by revising integrity constraints of an agent through communication, while we use integrity constraints for communication to explain critiques and help other agents in making counter-proposals. Meyer et al. [11] introduce a logical framework for negotiating agents. They introduce two different modes of negotiation: concession and adaptation. They provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes. They provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols. Moreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework. Foo et al. [5] model one-to-one negotiation as a one-time encounter between two extended logic programs.
An agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets. Starting from the initial agreement set S ∩ T for an answer set S of an agent and an answer set T of another agent, each agent extends this set to reflect its own demand while keeping consistency with the demand of the other agent. Their algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set. The work is extended to repeated encounters in [3]. In their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals. There are a number of proposals for negotiation based on argumentation. An advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1]. The existence of arguments is useful to convince other agents of reasons why an agent offers (counter-)proposals or returns critiques. Parsons et al. [13] develop a logic of argumentation-based negotiation among BDI agents. In one-to-one negotiation, an agent A generates a proposal together with its arguments, and passes it to another agent B. The proposal is evaluated by B, which attempts to build arguments against it. If it conflicts with B's interest, B informs A of its objection by sending back its attacking argument. In response to this, A tries to find an alternative way of achieving its original objective, or a way of persuading B to drop its objection. If either type of argument can be found, A will submit it to B. If B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success. Otherwise, the process is iterated. In this negotiation process, the agent A never changes its original objective, so negotiation ends in failure if A fails to find an alternative way of achieving the original objective. In our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation. Our framework does not have a mechanism of argumentation, but reasons for critiques can be communicated by responding critique sets. Kakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework. A proposal contains an offer corresponding to the negotiation object, together with supporting information representing conditions under which this offer is made. Supporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation. In their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable. Our present approach differs from theirs in the following points. First, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to accept. Second, in their negotiation protocol, counter-proposals are chosen among candidates based on preference knowledge of an agent at the meta-level, which represents the policy under which an agent uses its object-level decision rules according to situations. In our framework, counter-proposals are newly constructed using abduction and relaxation. The method of construction is independent of particular negotiation protocols.
As in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction. In contrast, our approach is based on extended abduction, which can not only introduce hypotheses but also remove them from a program. This is another important difference. Relaxation and neighborhood query answering are devised to make databases cooperative with their users [4, 6]. In this sense, those techniques have a spirit similar to cooperative problem solving in multi-agent systems. As far as the authors know, however, there is no study which applies those techniques to agent negotiation. 6. CONCLUSION In this paper we proposed a logical framework for negotiating agents. To construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation. It was shown that these two operations serve as general inference rules in producing proposals. We developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming. This enables us to realize automated negotiation on top of existing answer set solvers. The present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives. To compare and evaluate proposals, an agent must have preference knowledge of candidate proposals. Further elaboration to maximize the utility of agents is left for future study.
Negotiation by Abduction and Relaxation ABSTRACT This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals. 1. INTRODUCTION Negotiation usually proceeds in a series of rounds and each agent makes a proposal at every round. An agent that received a proposal responds in two ways. One is a critique, which is a remark as to whether or not (parts of) the proposal is accepted. The other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13]. To see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. (Bi (or Si) represents an utterance of B (or S) in the i-th round.) B1: I want to get it at the price under 1200 USD. S1: We can provide a PC with the requested specification if you pay for it by cash. S2: At the normal price, the requested PC costs 1300 USD. B3: I cannot accept the price. My budget is under 1200 USD. S3: We can provide another computer with the requested specification, except that it is made by the brand b2. The price is exactly 1200 USD. B4: I do not want a PC of the brand b2. Instead, I can downgrade the driver from DVD-RW to CD-RW in my initial proposal. S4: Ok, I accept your offer. In this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned. Critiques are produced by evaluating a proposal in a knowledge base of an agent. In contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one. It is known that there are two ways of producing counter-proposals: extending the initial proposal or amending part of the initial proposal. According to [13], the first type appears in the dialogue: A: "I propose that you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". The second type is in the dialogue: A: "I propose that I provide you with service Y if you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". A negotiation proceeds by iterating such "give-and-take" dialogues until it reaches an agreement/disagreement. In those dialogues, agents generate (counter-)proposals by reasoning on their own goals or objectives. The objective of the agent A in the above dialogues is to obtain service X. The agent B proposes conditions to provide the service. In the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise. In the dialogue of a buyer agent and a seller agent presented above, the buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW. Such behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems.
Currently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals. The purpose of this paper is to mechanize a process of building (counter-)proposals in one-to-one negotiation dialogues. We suppose an agent who has a knowledge base represented by a logic program. We then introduce methods for generating three different types of proposals. First, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one. Second, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one. Third, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal. We develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques. We also provide procedures for computing proposals in logic programming. This paper is organized as follows. Section 2 introduces a logical framework used in this paper. Section 3 presents methods for constructing proposals, and provides a negotiation protocol. Section 4 provides methods for computing proposals in logic programming. Section 5 discusses related works, and Section 6 concludes the paper. 2. PRELIMINARIES Logic programs considered in this paper are extended disjunctive programs (EDP) [7]. An EDP (or simply a program) is a set of rules of the form L1 ; ... ; Ll ← Ll+1, ..., Lm, not Lm+1, ..., not Ln (n ≥ m ≥ l ≥ 0). not L is called an NAF-literal. The symbol ";" represents disjunction. The left-hand side of the rule is the head, and the right-hand side is the body. A disjunction of literals and a conjunction of (NAF-)literals in a rule are identified with their corresponding sets of literals. A rule r is disjunctive if head(r) contains more than one literal. A rule r is an integrity constraint if head(r) = ∅; and r is a fact if body(r) = ∅. A program is NAF-free if no rule contains NAF-literals. Two rules/literals are identified with respect to variable renaming. Given a conjunction G of (NAF-)literals, Gθ denotes the conjunction obtained by applying a substitution θ to G. A program, rule, or literal is ground if it contains no variable. The semantics of an EDP is defined by the answer set semantics [7]. Let Lit be the set of all ground literals in the language of a program. Suppose a program P and a set of literals S (⊆ Lit). Then, the reduct P^S is the program which contains the ground rule head(r) ← body+(r) iff there is a rule r in Ground(P) such that body−(r) ∩ S = ∅. Given an EDP P and a set S of literals, S is an answer set of P if S = Cn(P^S). A program has none, one, or multiple answer sets in general. An answer set is consistent if it is not Lit. A program P is consistent if it has a consistent answer set; otherwise, P is inconsistent. Abductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming. The abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15]. An abductive program is a pair (P, H) where P is an EDP and H is a set of literals called abducibles. When a literal L ∈ H contains variables, any instance of L is also an abducible. An abductive program (P, H) is consistent if P is consistent. Throughout the paper, abductive programs are assumed to be consistent unless stated otherwise. Let (P, H) be an abductive program and G a conjunction as above.
A pair (E, F) is an explanation of an observation G in (P, H) if: 1. (P \ F) ∪ E has an answer set which satisfies G, 2. (P \ F) ∪ E is consistent, 3. E and F are sets of ground literals such that E ⊆ H \ P and F ⊆ H ∩ P. When (P \ F) ∪ E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program (P, H) satisfying G (with respect to (E, F)). Extended abduction introduces/removes hypotheses to/from a program to explain an observation. Note that "normal" abduction (as in [9]) considers only introducing hypotheses to explain an observation. 5. RELATED WORK As there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation. Sadri et al. [14] use abductive logic programming as a representation language of negotiating agents. Agents negotiate using common dialogue primitives, called dialogue moves. Each agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles. The behavior of agents is regulated by an observe-think-act cycle. Once a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure. Their approach and ours both employ abductive logic programming as a platform of agent reasoning, but the use of it is quite different. This is in contrast with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent. Third, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents. They provide an operational model that completely specifies the behavior of agents in terms of an agent cycle. We do not provide such a complete specification of the behavior of agents. Our primary interest is to mechanize construction of proposals. Bracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs. To explain an observation, two agents communicate by exchanging integrity constraints. In the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent. For instance, IC′: ← a, b, c relaxes IC: ← a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program. In contrast, we use relaxation for weakening proposals, and three different relaxation methods, anti-instantiation, dropping conditions, and goal replacement, are considered. Their goal is to explain an observation by revising integrity constraints of an agent through communication, while we use integrity constraints for communication to explain critiques and help other agents in making counter-proposals. Meyer et al. [11] introduce a logical framework for negotiating agents. They introduce two different modes of negotiation: concession and adaptation. They provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes. They provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols. Moreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework. Foo et al. [5] model one-to-one negotiation as a one-time encounter between two extended logic programs.
An agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets. Their algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set. The work is extended to repeated encounters in [3]. In their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals. There are a number of proposals for negotiation based on argumentation. An advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1]. The existence of arguments is useful to convince other agents of reasons why an agent offers (counter-)proposals or returns critiques. Parsons et al. [13] develop a logic of argumentation-based negotiation among BDI agents. In one-to-one negotiation, an agent A generates a proposal together with its arguments, and passes it to another agent B. The proposal is evaluated by B, which attempts to build arguments against it. If B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success. Otherwise, the process is iterated. In this negotiation process, the agent A never changes its original objective, so negotiation ends in failure if A fails to find an alternative way of achieving the original objective. In our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation. Our framework does not have a mechanism of argumentation, but reasons for critiques can be communicated by responding critique sets. Kakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework. A proposal contains an offer corresponding to the negotiation object, together with supporting information representing conditions under which this offer is made. Supporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation. In their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable. Our present approach differs from theirs in the following points. First, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to accept. Second, in their negotiation protocol, counter-proposals are chosen among candidates based on preference knowledge of an agent at the meta-level, which represents the policy under which an agent uses its object-level decision rules according to situations. In our framework, counter-proposals are newly constructed using abduction and relaxation. The method of construction is independent of particular negotiation protocols. As in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction. In contrast, our approach is based on extended abduction, which can not only introduce hypotheses but also remove them from a program. This is another important difference. Relaxation and neighborhood query answering are devised to make databases cooperative with their users [4, 6]. As far as the authors know, however, there is no study which applies those techniques to agent negotiation. 6. CONCLUSION In this paper we proposed a logical framework for negotiating agents.
To construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation. It was shown that these two operations serve as general inference rules in producing proposals. We developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming. This enables us to realize automated negotiation on top of existing answer set solvers. The present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives. To compare and evaluate proposals, an agent must have preference knowledge of candidate proposals. Further elaboration to maximize the utility of agents is left for future study.
I-62
A Q-decomposition and Bounded RTDP Approach to Resource Allocation
This paper contributes to the effective solution of stochastic resource allocation problems, which are known to be NP-Complete. To address this complex resource management problem, a Q-decomposition approach is proposed when the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. The Q-decomposition allows us to coordinate these reward-separated agents and thus permits reducing the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Q-decomposition is possible and we use heuristic search. In particular, bounded Real-time Dynamic Programming (bounded RTDP) is used. Bounded RTDP concentrates the planning on significant states only and prunes the action space. The pruning is accomplished by proposing tight upper and lower bounds on the value function.
[ "q-decomposit", "resourc alloc", "resourc manag", "reward separ agent", "heurist search", "real-time dynam program", "complex stochast resourc alloc problem", "plan agent", "markov decis process", "stochast environ", "margin revenu bound", "margin revenu" ]
[ "P", "P", "P", "P", "P", "M", "R", "R", "U", "M", "M", "U" ]
A Q-decomposition and Bounded RTDP Approach to Resource Allocation Pierrick Plamondon and Brahim Chaib-draa Computer Science & Software Engineering Dept Laval University Québec, Canada {plamon, chaib}@damas.ift.ulaval.ca Abder Rezak Benaskeur Decision Support Systems Section Defence R&D Canada - Valcartier Québec, Canada abderrezak.benaskeur@drdc-rddc.gc.ca ABSTRACT This paper contributes to the effective solution of stochastic resource allocation problems, which are known to be NP-Complete. To address this complex resource management problem, a Q-decomposition approach is proposed when the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. Q-decomposition allows coordinating these reward-separated agents and thus permits reducing the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Q-decomposition is possible and we use heuristic search. In particular, bounded Real-time Dynamic Programming (bounded rtdp) is used. Bounded rtdp concentrates the planning on significant states only and prunes the action space. The pruning is accomplished by proposing tight upper and lower bounds on the value function. Categories and Subject Descriptors I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence. General Terms Algorithms, Performance, Experimentation. 1. INTRODUCTION This paper aims to contribute to solving complex stochastic resource allocation problems. In general, resource allocation problems are known to be NP-Complete [12]. In such problems, a scheduling process suggests the action (i.e. resources to allocate) to undertake to accomplish certain tasks, according to the perfectly observable state of the environment. When executing an action to realize a set of tasks, the stochastic nature of the problem induces probabilities on the next visited state. In general, the number of states is the combination of all possible specific states of each task and available resources. In this case, the number of possible actions in a state is the combination of each individual possible resource assignment to the tasks. The very high number of states and actions in this type of problem makes it very complex. There can be many types of resource allocation problems. Firstly, if the resources are already shared among the agents, and the actions made by an agent do not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is one where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. To solve this problem efficiently, we adapt Q-decomposition proposed by Russell and Zimdars [9]. In our Q-decomposition approach, a planning agent manages each task and all agents have to share the limited resources. The planning process starts with the initial state s0. In s0, each agent computes its respective Q-value. Then, the planning agents are coordinated through an arbitrator to find the highest global Q-value by adding the respective possible Q-values of each agent.
When implemented with heuristic search, since the number of states and actions to consider when computing the optimal policy is exponentially reduced compared to other known approaches, Q-decomposition makes it possible to formulate the first optimal decomposed heuristic search algorithm for stochastic environments. On the other hand, when the resources are available to all agents, no Q-decomposition is possible. A common way of addressing this large stochastic problem is by using Markov Decision Processes (mdps), and in particular real-time search, where many algorithms have been developed recently. For instance, Real-Time Dynamic Programming (rtdp) [1], lrtdp [4], hdp [3], and lao* [5] are all state-of-the-art heuristic search approaches for stochastic environments. Because of its anytime quality, an interesting approach is rtdp, introduced by Barto et al. [1], which updates states in trajectories from an initial state s0 to a goal state sg. rtdp is used in this paper to solve efficiently a constrained resource allocation problem. rtdp is much more effective if the action space can be pruned of sub-optimal actions. To do this, McMahan et al. [6], Smith and Simmons [11], and Singh and Cohn [10] proposed solving a stochastic problem using an rtdp-type heuristic search with upper and lower bounds on the value of states. McMahan et al. [6] and Smith and Simmons [11] suggested, in particular, an efficient trajectory of state updates to further speed up the convergence, when given upper and lower bounds. This efficient trajectory of state updates can be combined with the approach proposed here, since this paper focuses on the definition of tight bounds and on efficient state updates for a constrained resource allocation problem. On the other hand, the approach by Singh and Cohn is suitable to our case, and is extended in this paper using, in particular, the concept of marginal revenue [7] to elaborate tight bounds. This paper proposes new algorithms to define upper and lower bounds in the context of an rtdp heuristic search approach. Our marginal revenue bounds are compared theoretically and empirically to the bounds proposed by Singh and Cohn. Also, even if the algorithm used here to obtain the optimal policy is rtdp, our bounds can be used with any other algorithm to solve an mdp. The only condition on the use of our bounds is to be in the context of stochastic constrained resource allocation. The problem is now modelled. 2. PROBLEM FORMULATION A simple resource allocation problem is one where there are the following two tasks to realize: ta1 = {wash the dishes} and ta2 = {clean the floor}. These two tasks are either in the realized state or in the not-realized state. To realize the tasks, two types of resources are assumed: res1 = {brush} and res2 = {detergent}. A computer has to compute the optimal allocation of these resources to cleaner robots to realize their tasks. In this problem, a state represents a conjunction of the particular state of each task and the available resources. The resources may be constrained by the amount that may be used simultaneously (local constraint) and in total (global constraint). Furthermore, the higher the number of resources allocated to realize a task, the higher the expectation of realizing it. For this reason, when the specific states of the tasks change, or when the number of available resources changes, the value of this state may change.
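To make this state representation concrete, a minimal Python sketch of the two-task example is given below. All names here (State, detergent_left, and so on) are illustrative choices of ours, not notation from the paper.

from dataclasses import dataclass
from typing import Tuple

TASKS = ("wash_dishes", "clean_floor")      # ta1, ta2
CONSUMABLE = {"detergent": 2}               # illustrative global constraint Gres
NON_CONSUMABLE = ("brush",)                 # never depleted; only Lres applies

@dataclass(frozen=True)
class State:
    realized: Tuple[bool, ...]              # the specific state s_ta of every task
    detergent_left: int                     # remaining consumable stock

    def is_goal(self) -> bool:
        return all(self.realized)           # sink state: every task achieved

s0 = State(realized=(False, False), detergent_left=CONSUMABLE["detergent"])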
When executing an action a in state s, the specific states of the tasks change stochastically, and the remaining resources are determined by subtracting the resources used by action a from those available in s, when the resource is consumable. Indeed, our model may consider consumable and non-consumable resource types. A consumable resource type is one where the amount of available resource is decreased when it is used. On the other hand, a non-consumable resource type is one where the amount of available resource is unchanged when it is used. For example, a brush is a non-consumable resource, while the detergent is a consumable resource. 2.1 Resource Allocation as an MDP In our problem, the transition function and the reward function are both known. A Markov Decision Process (mdp) framework is used to model our stochastic resource allocation problem. mdps have been widely adopted by researchers today to model a stochastic process. This is due to the fact that mdps provide a well-studied and simple, yet very expressive, model of the world. An mdp in the context of a resource allocation problem with limited resources is defined as a tuple ⟨Res, Ta, S, A, P, W, R, γ⟩, where:
• Res = {res1, ..., res|Res|} is a finite set of resource types available for a planning process. Each resource type may have a local resource constraint Lres on the number that may be used in a single step, and a global resource constraint Gres on the number that may be used in total. The global constraint only applies to consumable resource types (Resc) and the local constraints always apply to consumable and non-consumable resource types.
• Ta is a finite set of tasks, with ta ∈ Ta to be accomplished.
• S is a finite set of states, with s ∈ S. A state s is a tuple ⟨Ta, ⟨res1, ..., res|Resc|⟩⟩, which gives the characteristics of each unaccomplished task ta ∈ Ta in the environment, and the available consumable resources. sta is the specific state of task ta. Also, S contains a non-empty set sg ⊆ S of goal states. A goal state is a sink state where an agent stays forever.
• A is a finite set of actions (or assignments). The actions a ∈ A(s) applicable in a state are the combinations of all resource assignments that may be executed, according to the state s. In particular, a is simply an allocation of resources to the current tasks, and ata is the resource allocation to task ta. The possible actions are limited by Lres and Gres.
• Transition probabilities Pa(s'|s) for s ∈ S and a ∈ A(s).
• W = [wta] is the relative weight (criticality) of each task.
• State rewards R = [rs], with rs = Σ_{ta ∈ Ta} r_{sta}, where the relative reward of the state of a task r_{sta} = ℜ_{sta} × wta is the product of a real number ℜ_{sta} by the weight factor wta. For our problem, a reward of 1 × wta is given when the state of a task (sta) is an achieved state, and 0 in all other cases.
• A discount (preference) factor γ, which is a real number between 0 and 1.
A solution of an mdp is a policy π mapping states s into actions a ∈ A(s). In particular, πta(s) is the action (i.e. resources to allocate) that should be executed on task ta, considering the global state s. In this case, an optimal policy is one that maximizes the expected total reward for accomplishing all tasks. The optimal value of a state, V(s), is given by:
$$V(s) = R(s) + \max_{a \in A(s)} \gamma \sum_{s' \in S} P_a(s'|s)\, V(s') \qquad (1)$$
where the remaining consumable resources in state s' are Resc \ res(a), where res(a) are the consumable resources used by action a. Indeed, since an action a is a resource assignment, Resc \ res(a) is the new set of available resources after the execution of action a.
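As a sketch of how a single backup of Equation (1) might look in code, assuming the model is supplied as plain Python callables R, P, A and a value table V; every name here is an illustrative assumption, and a real solver would initialize unvisited successors with a heuristic h(s') rather than 0.

def bellman_backup(s, V, R, P, A, gamma=0.95):
    # V(s) = R(s) + max_{a in A(s)} gamma * sum_{s'} P_a(s'|s) V(s')
    # P(s, a) is assumed to return a dict {s': probability}; A(s) is assumed
    # non-empty and to contain only allocations respecting Lres and Gres.
    best = max(
        sum(prob * V.get(s2, 0.0) for s2, prob in P(s, a).items())
        for a in A(s)
    )
    return R(s) + gamma * best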
Furthermore, one may compute the Q-values Q(a, s) of each state-action pair using the following equation:
$$Q(a, s) = R(s) + \gamma \sum_{s' \in S} P_a(s'|s) \max_{a' \in A(s')} Q(a', s') \qquad (2)$$
where the optimal value of a state is V(s) = max_{a ∈ A(s)} Q(a, s). The policy is subjected to the local resource constraints res(π(s)) ≤ Lres ∀ s ∈ S and ∀ res ∈ Res. The global constraint is defined according to all system trajectories tra ∈ TRA. A system trajectory tra is a possible sequence of state-action pairs, until a goal state is reached under the optimal policy π. For example, state s is entered, which may transit to s' or to s'', according to action a. The two possible system trajectories are ⟨(s, a), (s')⟩ and ⟨(s, a), (s'')⟩. The global resource constraint is res(tra) ≤ Gres ∀ tra ∈ TRA and ∀ res ∈ Resc, where res(tra) is a function which returns the resources used by trajectory tra. Since the available consumable resources are represented in the state space, this condition is verified by itself. In other words, the model is Markovian, as the history does not have to be considered in the state space. Furthermore, time is not considered in the model description, but a time horizon may also be included by using a finite-horizon mdp. Since resource allocation in a stochastic environment is NP-Complete, heuristics should be employed. Q-decomposition, which decomposes a planning problem among many agents to reduce the computational complexity associated with the state and/or action spaces, is now introduced. 2.2 Q-decomposition for Resource Allocation There can be many types of resource allocation problems. Firstly, if the resources are already shared among the agents, and the actions made by an agent do not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is one where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. For instance, a group of agents which manages the oil consumed by a country falls in this group. These agents desire to maximize their specific reward by consuming the right amount of oil. However, all the agents are penalized when an agent consumes oil because of the pollution it generates. Another example of this type comes from our problem of interest, explained in Section 3, which is a naval platform which must counter incoming missiles (i.e. tasks) by using its resources (i.e. weapons, movements). In some scenarios, it may happen that the missiles can be classified in two types: those requiring a set of resources Res1 and those requiring a set of resources Res2. This can happen depending on the type of missiles, their range, and so on. In this case, two agents can plan for both sets of tasks to determine the policy. However, there are interactions between the resources of Res1 and Res2, so that certain combinations of resources cannot be assigned. In particular, if an agent i allocates resources Resi to the first set of tasks Tai, and agent i' allocates resources Resi' to the second set of tasks Tai', the resulting policy may include actions which cannot be executed together.
To resolve these conflicts, we use Q-decomposition proposed by Russell and Zimdars [9] in the context of reinforcement learning. The primary assumption underlying Q-decomposition is that the overall reward function R can be additively decomposed into separate rewards Ri for each distinct agent i ∈ Ag, where |Ag| is the number of agents. That is, R = Σ_{i ∈ Ag} Ri. It requires each agent to compute a value, from its perspective, for every action. To coordinate with each other, each agent i reports its action values Qi(ai, si) for each state si ∈ Si to an arbitrator at each learning iteration. The arbitrator then chooses an action maximizing the sum of the agent Q-values for each global state s ∈ S. The next time state s is updated, an agent i considers the value as its respective contribution, or Q-value, to the global maximal Q-value. That is, Qi(ai, si) is the value of a state such that it maximizes max_{a ∈ A(s)} Σ_{i ∈ Ag} Qi(ai, si). The fact that the agents use a determined Q-value as the value of a state is an extension of the Sarsa on-policy algorithm [8] to Q-decomposition. Russell and Zimdars called this approach local Sarsa. In this way, an ideal compromise can be found for the agents to reach a global optimum. Indeed, rather than allowing each agent to choose the successor action, each agent i uses the action a'i executed by the arbitrator in the successor state s'i:
$$Q_i(a_i, s_i) = R_i(s_i) + \gamma \sum_{s'_i \in S_i} P_{a_i}(s'_i|s_i)\, Q_i(a'_i, s'_i) \qquad (3)$$
where the remaining consumable resources in state s'i are Resci \ resi(ai) for a resource allocation problem. Russell and Zimdars [9] demonstrated that local Sarsa converges to the optimum. Also, in some cases, this form of agent decomposition allows the local Q-functions to be expressed by a much reduced state and action space. For our resource allocation problem described briefly in this section, Q-decomposition can be applied to generate an optimal solution. Indeed, an optimal Bellman backup can be applied in a state as in Algorithm 1. In Line 5 of the Qdec-backup function, each agent managing a task computes its respective Q-value. Here, Q*i(a'i, s') determines the optimal Q-value of agent i in state s'. An agent i uses, as the value of a possible state transition s', the Q-value for this agent which determines the maximal global Q-value for state s', as in the original Q-decomposition approach. In brief, for each visited state s ∈ S, each agent computes its respective Q-values with respect to the global state s. So the state space is the joint state space of all agents. Some of the gain in complexity from using Q-decomposition resides in the Σ_{s'i ∈ Si} Pai(s'i|s) part of the equation. An agent considers as possible state transitions only the possible states of the set of tasks it manages. Since the number of states is exponential with the number of tasks, using Q-decomposition should reduce the planning time significantly. Furthermore, the action space of the agents takes into account only their available resources, which is much less complex than a standard action space, which is the combination of all possible resource allocations in a state for all agents. Then, the arbitrator functionalities are in Lines 8 to 20. The global Q-value is the sum of the Q-values produced by each agent managing each task, as shown in Line 11, considering the global action a. In this case, when an action of an agent i cannot be executed simultaneously with an action of another agent i', the global action is simply discarded from the action space A(s). Line 14 simply allocates the current value with respect to the highest global Q-value, as in a standard Bellman backup. Then, the optimal policy and Q-value of each agent are updated in Lines 16 and 17 to the sub-actions ai and specific Q-values Qi(ai, s) of each agent for action a.
Algorithm 1 The Q-decomposition Bellman backup.
1: Function Qdec-backup(s)
2:   V(s) ← 0
3:   for all i ∈ Ag do
4:     for all ai ∈ Ai(s) do
5:       Qi(ai, s) ← Ri(s) + γ Σ_{s'i ∈ Si} Pai(s'i|s) Q*i(a'i, s')   {where Q*i(a'i, s') = hi(s') when s' is not yet visited, and s' has Resci \ resi(ai) remaining consumable resources for each agent i}
6:     end for
7:   end for
8:   for all a ∈ A(s) do
9:     Q(a, s) ← 0
10:    for all i ∈ Ag do
11:      Q(a, s) ← Q(a, s) + Qi(ai, s)
12:    end for
13:    if Q(a, s) > V(s) then
14:      V(s) ← Q(a, s)
15:      for all i ∈ Ag do
16:        πi(s) ← ai
17:        Q*i(ai, s) ← Qi(ai, s)
18:      end for
19:    end if
20:  end for
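The arbitration step of Algorithm 1 (Lines 8 to 20) can be sketched as follows. The container types are illustrative assumptions, and joint actions that violate inter-agent resource constraints are assumed to have been filtered out of joint_actions(s) beforehand.

def qdec_backup_arbitration(s, joint_actions, agents, q):
    # q[i][(a_i, s)] is agent i's Q-value for its sub-action a_i in global
    # state s (Line 5); the arbitrator sums them per joint action (Line 11)
    # and keeps the maximizing joint action (Lines 13-18).
    # agents is assumed to be range(num_agents), so a[i] indexes the tuple a.
    v, best = float("-inf"), None
    for a in joint_actions(s):                       # a = (a_1, ..., a_|Ag|)
        total = sum(q[i][(a[i], s)] for i in agents)
        if total > v:
            v, best = total, a
    shares = {i: q[i][(best[i], s)] for i in agents}  # each agent's Q*_i(a_i, s)
    return v, best, shares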
A standard Bellman backup has a complexity of O(|A| × |SAg|), where |SAg| is the number of joint states for all agents excluding the resources, and |A| is the number of joint actions. On the other hand, the Q-decomposition Bellman backup has a complexity of O((|Ag| × |Ai| × |Si|) + (|A| × |Ag|)), where |Si| is the number of states for an agent i, excluding the resources, and |Ai| is the number of actions for an agent i. Since |SAg| is combinatorial with the number of tasks, |Si| ≪ |S|. Also, |A| is combinatorial with the number of resource types. If the resources are already shared among the agents, the number of resource types for each agent will usually be lower than the set of all available resource types for all agents. In these circumstances, |Ai| ≪ |A|. In a standard Bellman backup, |A| is multiplied by |SAg|, which is much more complex than multiplying |A| by |Ag| with the Q-decomposition Bellman backup. Thus, the Q-decomposition Bellman backup is much less complex than a standard Bellman backup. Furthermore, the communication cost between the agents and the arbitrator is null, since this approach does not consider a geographically separated problem. However, when the resources are available to all agents, no Q-decomposition is possible. In this case, Bounded Real-Time Dynamic Programming (bounded-rtdp) makes it possible to focus the search on relevant states, and to prune the action space A by using lower and upper bounds on the value of states. bounded-rtdp is now introduced. 2.3 Bounded-RTDP Bonet and Geffner [4] proposed lrtdp as an improvement to rtdp [1]. lrtdp is a simple dynamic programming algorithm that involves a sequence of trial runs, each starting in the initial state s0 and ending in a goal or a solved state. Each lrtdp trial is the result of simulating the policy π while updating the values V(s) using a Bellman backup (Equation 1) over the states s that are visited. h(s) is a heuristic which defines an initial value for state s. This heuristic has to be admissible: the value given by the heuristic has to overestimate (or underestimate) the optimal value V(s) when the objective function is maximized (or minimized). For example, an admissible heuristic for a stochastic shortest path problem is the solution of a deterministic shortest path problem. Indeed, since the problem is stochastic, the optimal value is lower than for the deterministic version. It has been proven that lrtdp, given an admissible initial heuristic on the value of states, cannot be trapped in loops, and eventually yields optimal values [4].
The convergence is accomplished by means of a labeling procedure called checkSolved(s, ε). This procedure tries to label as solved each traversed state in the current trajectory. When the initial state is labelled as solved, the algorithm has converged. In this section, a bounded version of rtdp (bounded-rtdp) is presented in Algorithm 2 to prune the action space of sub-optimal actions. This pruning speeds up the convergence of lrtdp. bounded-rtdp is similar to rtdp, except that there are two distinct initial heuristics for unvisited states s ∈ S: hL(s) and hU(s). Also, the checkSolved(s, ε) procedure can be omitted because the bounds can provide the labeling of a state as solved. On the one hand, hL(s) defines a lower bound on the value of s such that the optimal value of s is higher than hL(s). For its part, hU(s) defines an upper bound on the value of s such that the optimal value of s is lower than hU(s). The values of the bounds are computed in Lines 3 and 4 of the bounded-backup function. Computing these two Q-values is done simultaneously, as the state transitions are the same for both Q-values. Only the values of the state transitions change. Thus, having to compute two Q-values instead of one does not augment the complexity of the approach. In fact, Smith and Simmons [11] state that the additional time to compute a Bellman backup for two bounds, instead of one, is no more than 10%, which is also what we obtained. In particular, L(s) is the lower bound of state s, while U(s) is the upper bound of state s. Similarly, QL(a, s) is the Q-value of the lower bound of action a in state s, while QU(a, s) is the Q-value of the upper bound of action a in state s. Using these two bounds allows significantly reducing the action space A. Indeed, in Lines 5 and 6 of the bounded-backup function, if QU(a, s) ≤ L(s), then action a may be pruned from the action space of s. In Line 13 of this function, a state can be labeled as solved if the difference between the lower and upper bounds is lower than ε. When the execution goes back to the bounded-rtdp function, the next state in Line 10 has a fixed number of consumable resources available, Resc, determined in Line 9. In brief, pickNextState(Resc) selects a non-solved state s, reachable under the current policy, which has the highest Bellman error (|U(s) − L(s)|). Finally, in Lines 12 to 15, a backup is made in a backward fashion on all visited states of a trajectory, once this trajectory has been completed. This strategy has been proven efficient [11] [6]. As discussed by Singh and Cohn [10], this type of algorithm has a number of desirable anytime characteristics: if an action has to be picked in state s before the algorithm has converged (while multiple competitive actions remain), the action with the highest lower bound is picked. Since the upper bound for state s is known, it may be estimated how far the lower bound is from the optimal. If the difference between the lower and upper bounds is too high, one can choose to use another greedy algorithm of one's choice, which outputs a fast and near-optimal solution. Furthermore, if a new task dynamically arrives in the environment, it can be accommodated by redefining the lower and upper bounds which exist at the time of its arrival. Singh and Cohn [10] proved that an algorithm that uses admissible lower and upper bounds to prune the action space is assured of converging to an optimal solution.
Algorithm 2 The bounded-rtdp algorithm. Adapted from [4] and [10].
1: Function bounded-rtdp(S)
2: returns a value function V
3: repeat
4:   s ← s0
5:   visited ← null
6:   repeat
7:     visited.push(s)
8:     bounded-backup(s)
9:     Resc ← Resc \ {π(s)}
10:    s ← s.pickNextState(Resc)
11:  until s is a goal
12:  while visited ≠ null do
13:    s ← visited.pop()
14:    bounded-backup(s)
15:  end while
16: until s0 is solved or |A(s)| = 1 ∀ s ∈ S reachable from s0
17: return V
Algorithm 3 The bounded Bellman backup.
1: Function bounded-backup(s)
2: for all a ∈ A(s) do
3:   QU(a, s) ← R(s) + γ Σ_{s' ∈ S} Pa(s'|s) U(s')
4:   QL(a, s) ← R(s) + γ Σ_{s' ∈ S} Pa(s'|s) L(s')   {where L(s') ← hL(s') and U(s') ← hU(s') when s' is not yet visited, and s' has Resc \ res(a) remaining consumable resources}
5:   if QU(a, s) ≤ L(s) then
6:     A(s) ← A(s) \ a
7:   end if
8: end for
9: L(s) ← max_{a ∈ A(s)} QL(a, s)
10: U(s) ← max_{a ∈ A(s)} QU(a, s)
11: π(s) ← arg max_{a ∈ A(s)} QL(a, s)
12: if |U(s) − L(s)| < ε then
13:   s ← solved
14: end if
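A compact Python sketch of the bounded backup (Algorithm 3) is given below, under the same illustrative model conventions as before. As a simplification, the pruning test uses the freshly backed-up lower bound as L(s); this is equivalent here, since pruned actions can never attain the maxima of Lines 9 and 10.

def bounded_backup(s, A, R, P, L, U, hL, hU, gamma=0.95, eps=1e-3):
    # Lines 2-4: back up both bounds in a single sweep over A(s).
    qU, qL = {}, {}
    for a in A[s]:
        qU[a] = R(s) + gamma * sum(p * U.get(s2, hU(s2)) for s2, p in P(s, a).items())
        qL[a] = R(s) + gamma * sum(p * L.get(s2, hL(s2)) for s2, p in P(s, a).items())
    L[s] = max(qL.values())                          # Line 9
    U[s] = max(qU.values())                          # Line 10
    # Lines 5-7: prune provably sub-optimal actions (QU(a,s) <= L(s)),
    # keeping at least the greedy action once the two bounds have met.
    A[s] = [a for a in A[s] if qU[a] > L[s]] or [max(A[s], key=qL.get)]
    pi_s = max(A[s], key=qL.get)                     # Line 11
    solved = abs(U[s] - L[s]) < eps                  # Lines 12-14
    return pi_s, solved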
The next sections describe two separate methods to define hL(s) and hU(s). First, the method of Singh and Cohn [10] is briefly described. Then, our own method proposes tighter bounds, thus allowing a more effective pruning of the action space. 2.4 Singh and Cohn's Bounds Singh and Cohn [10] defined lower and upper bounds to prune the action space. Their approach is quite straightforward. First, a value function is computed for all tasks to realize, using a standard rtdp approach. Then, using these task-value functions, a lower bound hL and an upper bound hU can be defined. In particular, hL(s) = max_{ta ∈ Ta} Vta(sta), and hU(s) = Σ_{ta ∈ Ta} Vta(sta). For readability, the upper bound by Singh and Cohn is named SinghU, and the lower bound is named SinghL. The admissibility of these bounds has been proven by Singh and Cohn, such that the upper bound always overestimates the optimal value of each state, while the lower bound always underestimates it. To determine the optimal policy π, Singh and Cohn implemented an algorithm very similar to bounded-rtdp, which uses the bounds to initialize L(s) and U(s). The only difference between bounded-rtdp and the rtdp version of Singh and Cohn is in the stopping criterion. Singh and Cohn proposed that the algorithm terminate when only one competitive action remains for each state, or when the range of all competitive actions for any state is bounded by an indifference parameter ε. bounded-rtdp labels states for which |U(s) − L(s)| < ε as solved, and convergence is reached when s0 is solved or when only one competitive action remains for each state. This stopping criterion is more effective, since it is similar to the one used by Smith and Simmons [11] and by McMahan et al.'s brtdp [6]. In this paper, the bounds defined by Singh and Cohn and implemented using bounded-rtdp define the Singh-rtdp approach. The next sections propose to tighten the bounds of Singh-rtdp to permit a more effective pruning of the action space. 2.5 Reducing the Upper Bound SinghU includes actions which may not be possible to execute because of resource constraints, which overestimates the upper bound. To consider only possible actions, our upper bound, named maxU, is introduced:
$$h_U(s) = \max_{a \in A(s)} \sum_{ta \in Ta} Q_{ta}(a_{ta}, s_{ta}) \qquad (4)$$
where Qta(ata, sta) is the Q-value of task ta for state sta and action ata, computed using a standard lrtdp approach. Theorem 2.1. The upper bound defined by Equation 4 is admissible. Proof: The local resource constraints are satisfied, because the upper bound is computed using all globally possible actions a. However, hU(s) still overestimates V(s) because the global resource constraint is not enforced. Indeed, each task may use all consumable resources for its own purpose. Doing this produces a higher value for each task than the one obtained when planning for all tasks globally with the shared limited resources.
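Under the assumption that per-task value functions Vta and Q-functions Qta have been precomputed by single-task (l)rtdp, the three bounds discussed so far can be sketched as below. The dictionary layout is an illustrative assumption, with s taken to be a mapping from each task to its specific state sta.

def singh_L(s, V):
    # SinghL: value of the best single task, max_ta V_ta(s_ta).
    return max(V[ta][s[ta]] for ta in V)

def singh_U(s, V):
    # SinghU: sum of independent task values, sum_ta V_ta(s_ta); it may
    # count joint allocations that the local constraints actually forbid.
    return sum(V[ta][s[ta]] for ta in V)

def max_U(s, Q, joint_actions):
    # maxU (Equation 4): only joint actions a in A(s) are summed, hence
    # max_U(s) <= singh_U(s) while remaining an upper bound on V(s).
    return max(
        sum(Q[ta][(a[ta], s[ta])] for ta in Q)
        for a in joint_actions(s)
    )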
Computing the maxU bound in a state has a complexity of O(|A| × |Ta|), and O(|Ta|) for SinghU. A standard Bellman backup has a complexity of O(|A| × |S|). Since |A| × |Ta| ≪ |A| × |S|, the computation time to determine the upper bound of a state, which is done once for each visited state, is much less than the computation time required to compute a standard Bellman backup for a state, which is usually done many times for each visited state. Thus, the computation time of the upper bound is negligible. 2.6 Increasing the Lower Bound The idea for increasing SinghL is to allocate the resources a priori among the tasks. When each task has its own set of resources, each task may be solved independently. The lower bound of state s is hL(s) = Σ_{ta ∈ Ta} Lowta(sta), where Lowta(sta) is a value function for each task ta ∈ Ta, such that the resources have been allocated a priori. The a priori allocation of the resources is made using marginal revenue, which is a widely used concept in microeconomics [7] and has recently been used for coordination of a decentralized mdp [2]. In brief, marginal revenue is the extra revenue that an additional unit of product will bring to a firm. Thus, for a stochastic resource allocation problem, the marginal revenue of a resource is the additional expected value it brings. The marginal revenue of a resource res for a task ta in a state sta is defined as follows:
$$mr_{ta}(s_{ta}) = \max_{a_{ta} \in A(s_{ta})} Q_{ta}(a_{ta}, s_{ta}) \;-\; \max_{a_{ta} \in A(s_{ta})} Q_{ta}(a_{ta} \mid res \notin a_{ta},\, s_{ta}) \qquad (5)$$
The concept of the marginal revenue of a resource is used in Algorithm 4 to allocate the resources a priori among the tasks, which enables defining the lower bound value of a state. In Line 4 of the algorithm, a value function is computed for all tasks in the environment using a standard lrtdp [4] approach. These value functions, which are also used for the upper bound, are computed considering that each task may use all available resources. Line 5 initializes the valueta variable. This variable is the estimated value of each task ta ∈ Ta. At the beginning of the algorithm, no resources are allocated to a specific task, thus the valueta variable is initialized to 0 for all ta ∈ Ta. Then, in Line 9, a resource type res (consumable or non-consumable) is selected to be allocated. Here, a domain expert may separate all available resources into many types or parts to be allocated. The resources are allocated in the order of their specialization. In other words, the more a resource is efficient on a small group of tasks, the earlier it is allocated. Allocating the resources in this order improves the quality of the resulting lower bound. Line 12 computes the marginal revenue of a consumable resource res for each task ta ∈ Ta. For a non-consumable resource, since the resource is not considered in the state space, all other states reachable from sta consider that the resource res is still usable. The approach here is to sum the difference between the real value of a state and the maximal Q-value of this state if resource res cannot be used, for all states in a trajectory given by the policy of task ta. This heuristic proved to obtain good results, but other ones may be tried, for example Monte-Carlo simulation.
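For a consumable resource, Equation (5) can be sketched as follows; A_ta(s_ta) is assumed to enumerate the task-level allocations, each represented as a collection of resources, and all names are illustrative.

def marginal_revenue(s_ta, res, Q_ta, A_ta):
    # Best allocation overall, minus the best allocation that avoids `res`.
    best = max(Q_ta[(a, s_ta)] for a in A_ta(s_ta))
    best_without = max(
        (Q_ta[(a, s_ta)] for a in A_ta(s_ta) if res not in a),
        default=0.0,   # every allocation uses res: it is indispensable here
    )
    return best - best_without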
In Line 21, the marginal revenue is updated as a function of the resources already allocated to each task. R(sgta) is the reward for realizing task ta. Thus, (Vta(sta) − valueta)/R(sgta) is the residual expected value that remains to be achieved, knowing the current allocation to task ta, normalized by the reward of realizing the task. The marginal revenue is multiplied by this term to indicate that the higher the residual value of a task, the higher its marginal revenue is going to be. Then, a task ta with the highest marginal revenue, adjusted with residual value, is selected in Line 23. In Line 24, the resource type res is allocated to the group of resources Resta of task ta. Afterwards, Line 29 recomputes valueta. The first part of the equation to compute valueta represents the expected residual value for task ta. This term is multiplied by max_{ata ∈ A(sta)} Qta(ata, sta(res))/Vta(sta), which is the efficiency ratio of resource type res. In other words, valueta is assigned valueta + (the residual value × the value ratio of resource type res). For a consumable resource, the Q-value considers only resource res in the state space, while for a non-consumable resource, no resources are available. All resource types are allocated in this manner until Res is empty. All consumable and non-consumable resource types are allocated to each task. When all resources are allocated, the lower bound components Lowta of each task are computed in Line 32.
Algorithm 4 The marginal revenue lower bound algorithm.
1: Function revenue-bound(S)
2: returns a lower bound LowTa
3: for all ta ∈ Ta do
4:   Vta ← lrtdp(Sta)
5:   valueta ← 0
6: end for
7: s ← s0
8: repeat
9:   res ← Select a resource type res ∈ Res
10:  for all ta ∈ Ta do
11:    if res is consumable then
12:      mrta(sta) ← Vta(sta) − Vta(sta(Res \ res))
13:    else
14:      mrta(sta) ← 0
15:      repeat
16:        mrta(sta) ← mrta(sta) + Vta(sta) − max_{(ata ∈ A(sta) | res ∉ ata)} Qta(ata, sta)
17:        sta ← sta.pickNextState(Resc)
18:      until sta is a goal
19:      s ← s0
20:    end if
21:    mrrvta(sta) ← mrta(sta) × (Vta(sta) − valueta)/R(sgta)
22:  end for
23:  ta ← Task ta ∈ Ta which maximizes mrrvta(sta)
24:  Resta ← Resta ∪ {res}
25:  temp ← ∅
26:  if res is consumable then
27:    temp ← res
28:  end if
29:  valueta ← valueta + (Vta(sta) − valueta) × max_{ata ∈ A(sta, res)} Qta(ata, sta(temp))/Vta(sta)
30: until all resource types res ∈ Res are assigned
31: for all ta ∈ Ta do
32:   Lowta ← lrtdp(Sta, Resta)
33: end for
34: return LowTa
When the global solution is computed, the lower bound is as follows:
$$h_L(s) = \max\Big(\mathit{SinghL}(s),\; \sum_{ta \in Ta} Low_{ta}(s_{ta})\Big) \qquad (6)$$
We use the maximum of the SinghL bound and the sum of the lower bound components Lowta; thus marginal-revenue ≥ SinghL. In particular, the SinghL bound may be higher when a small number of tasks remain. As the components Lowta are computed considering s0, if, for example, only one task remains in a subsequent state, the bound of SinghL will be higher than any of the Lowta components. The main difference in complexity between SinghL and revenue-bound is in Line 32, where a value for each task has to be computed with the shared resources. However, since the resources are shared, the state space and action space are greatly reduced for each task, which greatly reduces the computation compared to the value functions computed in Line 4, which are needed for both SinghL and revenue-bound.
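The greedy loop at the heart of Algorithm 4 can be condensed into the sketch below. Here mr(ta, res), V[ta] and reward[ta] stand in for mrta(sta), Vta(s0) and R(sgta), and the Line 29 update is approximated with an efficiency ratio derived from mr itself rather than the paper's Q-value ratio; this is an illustration of the idea, not a faithful implementation.

def allocate_a_priori(resource_types, tasks, mr, V, reward):
    value = {ta: 0.0 for ta in tasks}      # value_ta, all 0 initially (Line 5)
    own = {ta: [] for ta in tasks}         # Res_ta: resources handed to task ta
    for res in resource_types:             # most specialized types first (Line 9)
        # Line 21: marginal revenue weighted by the normalized residual value.
        ta = max(tasks, key=lambda t: mr(t, res) * (V[t] - value[t]) / reward[t])
        own[ta].append(res)                # Line 24
        ratio = min(1.0, mr(ta, res) / V[ta]) if V[ta] else 0.0
        value[ta] += (V[ta] - value[ta]) * ratio   # residual value * efficiency
    return own   # each Low_ta is then obtained by solving ta on own[ta] alone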
Theorem 2.2. The lower bound of Equation 6 is admissible. Proof: Lowta(sta) is computed with the resources being shared. Summing the Lowta(sta) value functions for each ta ∈ Ta does not violate the local and global resource constraints. Indeed, as the resources are shared, the tasks cannot overuse them. Thus, hL(s) is the value of a realizable policy, and an admissible lower bound. 3. DISCUSSION AND EXPERIMENTS The domain of the experiments is a naval platform which must counter incoming missiles (i.e. tasks) by using its resources (i.e. weapons, movements). For the experiments, 100 randomly generated resource allocation problems were used for each approach and for each possible number of tasks. In our problem, |Sta| = 4, thus each task can be in four distinct states. There are two types of states: first, states where actions modify the transition probabilities; and then, goal states. The state transitions are all stochastic because, when a missile is in a given state, it may always transit to many possible states. In particular, each resource type has a probability of countering a missile of between 45% and 65%, depending on the state of the task. When a missile is not countered, it transits to another state, which may or may not be preferred to the current state, where the most preferred state for a task is the one in which it is countered. The effectiveness of each resource is modified randomly by ±15% at the start of a scenario. There are also local and global resource constraints on the amounts that may be used. For the local constraints, at most 1 resource of each type can be allocated to execute tasks in a specific state. This constraint is also present on a real naval platform because of sensor and launcher constraints and engagement policies. Furthermore, for consumable resources, the total amount of available consumable resource is between 1 and 2 for each type. The global constraint is generated randomly at the start of a scenario for each consumable resource type. The number of resource types has been fixed to 5, with 3 consumable resource types and 2 non-consumable resource types. For this problem, a standard lrtdp approach has been implemented. A simple heuristic has been used, where the value of an unvisited state is assigned as the value of a goal state in which all tasks are achieved. This way, the value of each unvisited state is assured to overestimate its real value, since the value of achieving a task ta is the highest the planner may get for ta. Since this heuristic is quite straightforward, the advantages of using better heuristics would be even more evident. Nevertheless, even if the lrtdp approach uses a simple heuristic, a huge part of the state space is still not visited when computing the optimal policy. The approaches described in this paper are compared in Figures 1 and 2. Let us summarize these approaches here:
• Qdec-lrtdp: The backups are computed using the Qdec-backup function (Algorithm 1), but in a lrtdp context. In particular, the updates made in the checkSolved function are also made using the Qdec-backup function.
• lrtdp-up: The upper bound maxU is used for lrtdp.
• Singh-rtdp: The SinghL and SinghU bounds are used for bounded-rtdp.
• mr-rtdp: The revenue-bound and maxU bounds are used for bounded-rtdp.
To implement Qdec-lrtdp, we divided the set of tasks into two equal parts. The set of tasks Tai, managed by agent i, can be accomplished with the set of resources Resi, while the second set of tasks Tai', managed by agent i', can be accomplished with the set of resources Resi'. Resi had one consumable resource type and one non-consumable resource type, while Resi' had two consumable resource types and one non-consumable resource type. When the number of tasks is odd, one more task was assigned to Tai'. There are constraints between the groups of resources Resi and Resi', such that some assignments are not possible. These constraints are managed by the arbitrator as described in Section 2.2. Q-decomposition permits diminishing the planning time significantly in our problem settings, and seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least one other agent. To compute the lower bound of revenue-bound, all available resources have to be separated into many types or parts to be allocated. For our problem, we allocated each resource of each type in the order of its specialization, as described with the revenue-bound function. In terms of experiments, notice that the lrtdp and lrtdp-up approaches for resource allocation, which do not prune the action space, are much more complex. For instance, it took an average of 1512 seconds to plan with the lrtdp-up approach for six tasks (see Figure 2). The Singh-rtdp approach diminished the planning time by using lower and upper bounds to prune the action space. mr-rtdp further reduces the planning time by providing very tight initial bounds. In particular, Singh-rtdp needed 231 seconds on average to solve problems with six tasks, while mr-rtdp required 76 seconds. Indeed, the time reduction is quite significant compared to lrtdp-up, which demonstrates the efficiency of using bounds to prune the action space. Furthermore, we implemented mr-rtdp with the SinghU bound, and this was slightly less efficient than with the maxU bound. We also implemented mr-rtdp with the SinghL bound, and this was slightly more efficient than Singh-rtdp. From these results, we conclude that the difference in efficiency between mr-rtdp and Singh-rtdp is more attributable to the marginal-revenue lower bound than to the maxU upper bound. Indeed, when the number of tasks to execute is high, the lower bound of Singh-rtdp takes the value of a single task. On the other hand, the lower bound of mr-rtdp takes into account the value of all tasks by using a heuristic to distribute the resources. Indeed, an optimal allocation is one where the resources are distributed in the best way for all tasks, and our lower bound heuristically does that.
[Figure 1: Efficiency of Q-decomposition LRTDP and LRTDP (planning time in seconds versus number of tasks).]
[Figure 2: Efficiency of MR-RTDP compared to SINGH-RTDP (planning time in seconds versus number of tasks).]
4. CONCLUSION The experiments have shown that Q-decomposition seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least one other agent.
On the other hand, when the available resources are shared, no Q-decomposition is possible, and we proposed tight bounds for heuristic search. In this case, the planning time of bounded-rtdp, which prunes the action space, is significantly lower than that of lrtdp. Furthermore, the marginal revenue bound proposed in this paper compares favorably to the approach of Singh and Cohn [10]. bounded-rtdp with our proposed bounds may apply to a wide range of stochastic environments. The only condition for the use of our bounds is that each task possesses consumable and/or non-consumable limited resources. An interesting research avenue would be to experiment with our bounds in other heuristic search algorithms. For instance, frtdp [11] and brtdp [6] are both efficient heuristic search algorithms. In particular, both of these approaches proposed efficient state trajectory updates when given upper and lower bounds. Our tight bounds would enable both frtdp and brtdp to reduce the number of backups to perform before convergence. Finally, the bounded-rtdp function prunes the action space when QU(a, s) ≤ L(s), as Singh and Cohn [10] suggested. frtdp and brtdp could also prune the action space in these circumstances to further reduce their planning time. 5. REFERENCES
[1] A. Barto, S. Bradtke, and S. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1):81-138, 1995.
[2] A. Beynier and A. I. Mouaddib. An iterative algorithm for solving constrained decentralized Markov decision processes. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06), 2006.
[3] B. Bonet and H. Geffner. Faster heuristic search algorithms for planning with uncertainty and full feedback. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), August 2003.
[4] B. Bonet and H. Geffner. Labeled rtdp: Improving the convergence of real-time dynamic programming. In Proceedings of the Thirteenth International Conference on Automated Planning & Scheduling (ICAPS-03), pages 12-21, Trento, Italy, 2003.
[5] E. A. Hansen and S. Zilberstein. lao*: A heuristic search algorithm that finds solutions with loops. Artificial Intelligence, 129(1-2):35-62, 2001.
[6] H. B. McMahan, M. Likhachev, and G. J. Gordon. Bounded real-time dynamic programming: rtdp with monotone upper bounds and performance guarantees. In ICML '05: Proceedings of the Twenty-Second International Conference on Machine Learning, pages 569-576, New York, NY, USA, 2005. ACM Press.
[7] R. S. Pindyck and D. L. Rubinfeld. Microeconomics. Prentice Hall, 2000.
[8] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical report CUED/F-INFENG/TR 166, Cambridge University Engineering Department, 1994.
[9] S. J. Russell and A. Zimdars. Q-decomposition for reinforcement learning agents. In ICML, pages 656-663, 2003.
[10] S. Singh and D. Cohn. How to dynamically merge Markov decision processes. In Advances in Neural Information Processing Systems, volume 10, pages 1057-1063, Cambridge, MA, USA, 1998. MIT Press.
[11] T. Smith and R. Simmons. Focused real-time dynamic programming for mdps: Squeezing more out of a heuristic. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), Boston, USA, 2006.
[12] W. Zhang. Modeling and solving a resource allocation problem with soft constraint techniques. Technical report wucs-2002-13, Washington University, Saint Louis, Missouri, 2002.
A Q-decomposition and Bounded RTDP Approach to Resource Allocation ABSTRACT This paper contributes to solve effectively stochastic resource allocation problems known to be NP-Complete. To address this complex resource management problem, a Qdecomposition approach is proposed when the resources which are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent. The Q-decomposition allows to coordinate these reward separated agents and thus permits to reduce the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Qdecomposition is possible and we use heuristic search. In particular, the bounded Real-time Dynamic Programming (bounded RTDP) is used. Bounded RTDP concentrates the planning on significant states only and prunes the action space. The pruning is accomplished by proposing tight upper and lower bounds on the value function. 1. INTRODUCTION This paper aims to contribute to solve complex stochastic resource allocation problems. In general, resource allocation problems are known to be NP-Complete [12]. In such problems, a scheduling process suggests the action (i.e. resources to allocate) to undertake to accomplish certain tasks, abderrezak.benaskeur@drdc-rddc.gc.ca according to the perfectly observable state of the environment. When executing an action to realize a set of tasks, the stochastic nature of the problem induces probabilities on the next visited state. In general, the number of states is the combination of all possible specific states of each task and available resources. In this case, the number of possible actions in a state is the combination of each individual possible resource assignment to the tasks. The very high number of states and actions in this type of problem makes it very complex. There can be many types of resource allocation problems. Firstly, if the resources are already shared among the agents, and the actions made by an agent does not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent. To solve this problem efficiently, we adapt Qdecomposition proposed by Russell and Zimdars [9]. In our Q-decomposition approach, a planning agent manages each task and all agents have to share the limited resources. The planning process starts with the initial state s0. In s0, each agent computes their respective Q-value. Then, the planning agents are coordinated through an arbitrator to find the highest global Q-value by adding the respective possible Q-values of each agents. When implemented with heuristic search, since the number of states and actions to consider when computing the optimal policy is exponentially reduced compared to other known approaches, Q-decomposition allows to formulate the first optimal decomposed heuristic search algorithm in a stochastic environments. On the other hand, when the resources are available to all agents, no Q-decomposition is possible. A common way of addressing this large stochastic problem is by using Markov Decision Processes (MDPs), and in particular real-time search where many algorithms have been developed recently. 
For instance Real-Time Dynamic Programming (RTDP) [1], LRTDP [4], HDP [3], and LAO" [5] are all state-of-the-art heuristic search approaches in a stochastic environment. Because of its anytime quality, an interesting approach is RTDP introduced by Barto et al. [1] which updates states in trajectories from an initial state s0 to a goal state sg. RTDP is used in this paper to solve efficiently a constrained resource allocation problem. RTDP is much more effective if the action space can be pruned of sub-optimal actions. To do this, McMahan et al. [6], Smith and Simmons [11], and Singh and Cohn [10] proposed solving a stochastic problem using a RTDP type heuristic search with upper and lower bounds on the value of states. McMahan et al. [6] and Smith and Simmons [11] suggested, in particular, an efficient trajectory of state updates to further speed up the convergence, when given upper and lower bounds. This efficient trajectory of state updates can be combined to the approach proposed here since this paper focusses on the definition of tight bounds, and efficient state update for a constrained resource allocation problem. On the other hand, the approach by Singh and Cohn is suitable to our case, and extended in this paper using, in particular, the concept of marginal revenue [7] to elaborate tight bounds. This paper proposes new algorithms to define upper and lower bounds in the context of a RTDP heuristic search approach. Our marginal revenue bounds are compared theoretically and empirically to the bounds proposed by Singh and Cohn. Also, even if the algorithm used to obtain the optimal policy is RTDP, our bounds can be used with any other algorithm to solve an MDP. The only condition on the use of our bounds is to be in the context of stochastic constrained resource allocation. The problem is now modelled. 2. PROBLEM FORMULATION A simple resource allocation problem is one where there are the following two tasks to realize: ta1 = {wash the dishes}, and ta2 = {clean the floor}. These two tasks are either in the realized state, or not realized state. To realize the tasks, two type of resources are assumed: res1 = {brush}, and res2 = {detergent}. A computer has to compute the optimal allocation of these resources to cleaner robots to realize their tasks. In this problem, a state represents a conjunction of the particular state of each task, and the available resources. The resources may be constrained by the amount that may be used simultaneously (local constraint), and in total (global constraint). Furthermore, the higher is the number of resources allocated to realize a task, the higher is the expectation of realizing the task. For this reason, when the specific states of the tasks change, or when the number of available resources changes, the value of this state may change. When executing an action a in state s, the specific states of the tasks change stochastically, and the remaining resource are determined with the resource available in s, subtracted from the resources used by action a, if the resource is consumable. Indeed, our model may consider consumable and non-consumable resource types. A consumable resource type is one where the amount of available resource is decreased when it is used. On the other hand, a nonconsumable resource type is one where the amount of available resource is unchanged when it is used. For example, a brush is a non-consumable resource, while the detergent is a consumable resource. 
2.1 Resource Allocation as a MDPs In our problem, the transition function and the reward function are both known. A Markov Decision Process (MDP) framework is used to model our stochastic resource allocation problem. MDPs have been widely adopted by researchers today to model a stochastic process. This is due to the fact that MDPs provide a well-studied and simple, yet very expressive model of the world. An MDP in the context of a resource allocation problem with limited resources is defined as a tuple (Res, Ta, S, A, P, W, R,), where: • Res = (res1,..., reslResl) is a finite set of resource types available for a planning process. Each resource type may have a local resource constraint Lres on the number that may be used in a single step, and a global resource constraint Gres on the number that may be used in total. The global constraint only applies for consumable resource types (Resc) and the local constraints always apply to consumable and nonconsumable resource types. • Ta is a finite set of tasks with ta ∈ Ta to be accomplished. • S is a finite set of states with s ∈ S. A state s is a tuple (Ta, (res1,..., reslRescl)), which is the characteristic of each unaccomplished task ta ∈ Ta in the environment, and the available consumable resources. sta is the specific state of task ta. Also, S contains a non empty set sg ⊆ S of goal states. A goal state is a sink state where an agent stays forever. • A is a finite set of actions (or assignments). The actions a ∈ A (s) applicable in a state are the combination of all resource assignments that may be executed, according to the state s. In particular, a is simply an allocation of resources to the current tasks, and ata is the resource allocation to task ta. The possible actions are limited by Lres and Gres. • Transition probabilities Pa (s ~ | s) for s ∈ S and a ∈ A (s). • W = [wta] is the relative weight (criticality) of each task. • State rewards R = [rs]: F, rsta ← Rsta × wta. The taETa relative reward of the state of a task rsta is the product of a real number Rsta by the weight factor wta. For our problem, a reward of 1 × wta is given when the state of a task (sta) is in an achieved state, and 0 in all other cases. • A discount (preference) factor - y, which is a real number between 0 and 1. A solution of an MDP is a policy π mapping states s into actions a ∈ A (s). In particular, πta (s) is the action (i.e. resources to allocate) that should be executed on task ta, considering the global state s. In this case, an optimal policy is one that maximizes the expected total reward for accomplishing all tasks. The optimal value of a state, V (s), is given by: where the remaining consumable resources in state s ~ are Resc \ res (a), where res (a) are the consumable resources used by action a. Indeed, since an action a is a resource assignment, Resc \ res (a) is the new set of available resources after the execution of action a. Furthermore, one may compute the Q-Values Q (a, s) of each state action pair using the where the optimal value of a state is V ~ (s) = max a ∈ A (s) Q (a, s). The policy is subjected to the local resource constraints res (π (s)) <LresV s E S, and V res E Res. The global constraint is defined according to all system trajectories tra E TRA. A system trajectory tra is a possible sequence of state-action pairs, until a goal state is reached under the optimal policy π. For example, state s is entered, which may transit to s ~ or to s ~ ~, according to action a. 
The two possible system trajectories are ((s, a), (s ~)) and ((s, a), (s ~ ~)). The global resource constraint is res (tra) <GresV tra E TRA, and V res E Resc where res (tra) is a function which returns the resources used by trajectory tra. Since the available consumable resources are represented in the state space, this condition is verified by itself. In other words, the model is Markovian as the history has not to be considered in the state space. Furthermore, the time is not considered in the model description, but it may also include a time horizon by using a finite horizon MDP. Since resource allocation in a stochastic environment is NP-Complete, heuristics should be employed. Q-decomposition which decomposes a planning problem to many agents to reduce the computational complexity associated to the state and/or action spaces is now introduced. 2.2 Q-decomposition for Resource Allocation There can be many types of resource allocation problems. Firstly, if the resources are already shared among the agents, and the actions made by an agent does not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent. For instance, a group of agents which manages the oil consummated by a country falls in this group. These agents desire to maximize their specific reward by consuming the right amount of oil. However, all the agents are penalized when an agent consumes oil because of the pollution it generates. Another example of this type comes from our problem of interest, explained in Section 3, which is a naval platform which must counter incoming missiles (i.e. tasks) by using its resources (i.e. weapons, movements). In some scenarios, it may happens that the missiles can be classified in two types: Those requiring a set of resources Res1 and those requiring a set of resources Res2. This can happen depending on the type of missiles, their range, and so on. In this case, two agents can plan for both set of tasks to determine the policy. However, there are interaction between the resource of Res1 and Res2, so that certain combination of resource cannot be assigned. IN particular, if an agent i allocate resources Resi to the first set of tasks Tai, and agent i ~ allocate resources Resi, to second set of tasks Tai,, the resulting policy may include actions which cannot be executed together. To result these conflicts, we use Q-decomposition proposed by Russell and Zimdars [9] in the context of reinforcement learning. The primary assumption underlying Qdecomposition is that the overall reward function R can be additively decomposed into separate rewards Ri for each distinct agent i E Ag, where IAgI is the number of agents. That is, R = Ei ∈ Ag Ri. It requires each agent to compute a value, from its perspective, for every action. To coordinate with each other, each agent i reports its action values Qi (ai, si) for each state si E Si to an arbitrator at each learning iteration. The arbitrator then chooses an action maximizing the sum of the agent Q-values for each global state s E S. The next time state s is updated, an agent i considers the value as its respective contribution, or Q-value, to the global maximal Q-value. That is, Qi (ai, si) is the value of a state E such that it maximizes maxa ∈ A (s) i ∈ Ag Qi (ai, si). 
The fact that the agents use an arbitrated Q-value as the value of a state is an extension of the Sarsa on-policy algorithm [8] to Q-decomposition; Russell and Zimdars called this approach local Sarsa. In this way, an ideal compromise can be found for the agents to reach a global optimum. Indeed, rather than allowing each agent to choose the successor action, each agent i uses the action a′_i executed by the arbitrator in the successor state s′_i:

Q_i(a_i, s_i) ← Q_i(a_i, s_i) + α [ R_i(s_i) + γ Q_i(a′_i, s′_i) − Q_i(a_i, s_i) ]    (3)

where the remaining consumable resources in state s′_i are Res_ci \ res_i(a_i) for a resource allocation problem. Russell and Zimdars [9] demonstrated that local Sarsa converges to the optimum. Also, in some cases, this form of agent decomposition allows the local Q-functions to be expressed over a much reduced state and action space.

For the resource allocation problem described briefly in this section, Q-decomposition can be applied to generate an optimal solution. Indeed, an optimal Bellman backup can be applied in a state as in Algorithm 1. In Line 5 of the QDEC-BACKUP function, each agent managing a task computes its respective Q-value. Here, Q*_i(a′_i, s′) denotes the optimal Q-value of agent i in state s′. An agent i uses as the value of a possible state transition s′ the Q-value for this agent which determines the maximal global Q-value for state s′, as in the original Q-decomposition approach. In brief, for each visited state s ∈ S, each agent computes its respective Q-values with respect to the global state s, so the state space is the joint state space of all agents. Some of the gain in complexity from Q-decomposition resides in the Σ_{s′_i∈S_i} P_{a_i}(s′_i|s) part of the equation: an agent considers as possible state transitions only the possible states of the set of tasks it manages. Since the number of states is exponential in the number of tasks, using Q-decomposition should reduce the planning time significantly. Furthermore, the action space of each agent takes into account only its available resources, which is much less complex than a standard action space, i.e., the combination of all possible resource allocations in a state for all agents. The arbitrator functionality appears in Lines 8 to 20. The global Q-value is the sum of the Q-values produced by each agent managing each task, as shown in Line 11, considering the global action a. In this case, when an action of an agent i cannot be executed simultaneously with an action of another agent i′, the global action is simply discarded from the action space A(s). Line 14 simply updates the current value with the highest global Q-value, as in a standard Bellman backup. Then, the optimal policy and Q-value of each agent are updated in Lines 16 and 17 to the sub-action a_i and specific Q-value Q_i(a_i, s) of each agent for action a. Algorithm 1: The Q-decomposition Bellman backup.
1: Function QDEC-BACKUP(s)
2:   V(s) ← 0
3:   for all i ∈ Ag do
4:     for all a_i ∈ A_i(s) do
5:       Q_i(a_i, s) ← R_i(s) + γ Σ_{s′_i∈S_i} P_{a_i}(s′_i|s) Q*_i(a′_i, s′)
         {where Q*_i(a′_i, s′) = h_i(s′) when s′ has not yet been visited, and s′ has Res_ci \ res_i(a_i) remaining consumable resources for each agent i}
6:     end for
7:   end for
8:   for all a ∈ A(s) do
9:     Q(a, s) ← 0
10:    for all i ∈ Ag do
11:      Q(a, s) ← Q(a, s) + Q_i(a_i, s)
12:    end for
13:    if Q(a, s) > V(s) then
14:      V(s) ← Q(a, s)
15:      for all i ∈ Ag do
16:        π_i(s) ← a_i
17:        Q*_i(a_i, s) ← Q_i(a_i, s)
18:      end for
19:    end if
20:  end for

A standard Bellman backup has a complexity of O(|A| × |S_Ag|), where |S_Ag| is the number of joint states for all agents excluding the resources, and |A| is the number of joint actions. On the other hand, the Q-decomposition Bellman backup has a complexity of O((|Ag| × |A_i| × |S_i|) + (|A| × |Ag|)), where |S_i| is the number of states for an agent i, excluding the resources, and |A_i| is the number of actions for an agent i. Since |S_Ag| is combinatorial in the number of tasks, |S_i| ≪ |S|. Also, |A| is combinatorial in the number of resource types. If the resources are already shared among the agents, the number of resource types for each agent will usually be lower than the set of all available resource types for all agents; in these circumstances, |A_i| ≪ |A|. In a standard Bellman backup, |A| is multiplied by |S_Ag|, which is much more costly than multiplying |A| by |Ag| as in the Q-decomposition Bellman backup. Thus, the Q-decomposition Bellman backup is much less complex than a standard Bellman backup. Furthermore, the communication cost between the agents and the arbitrator is null, since this approach does not consider a geographically separated problem. However, when the resources are available to all agents, no Q-decomposition is possible. In this case, Bounded Real-Time Dynamic Programming (BOUNDED-RTDP) permits focusing the search on relevant states and pruning the action space A by using lower and upper bounds on the value of states. BOUNDED-RTDP is now introduced.

2.3 Bounded-RTDP

Bonet and Geffner [4] proposed LRTDP as an improvement to RTDP [1]. LRTDP is a simple dynamic programming algorithm that involves a sequence of trial runs, each starting in the initial state s_0 and ending in a goal or a solved state. Each LRTDP trial is the result of simulating the policy π while updating the values V(s) using a Bellman backup (Equation 1) over the states s that are visited. h(s) is a heuristic which defines an initial value for state s. This heuristic has to be admissible: the value given by the heuristic has to overestimate (or underestimate) the optimal value V*(s) when the objective function is maximized (or minimized). For example, an admissible heuristic for a stochastic shortest-path problem is the solution of the corresponding deterministic shortest-path problem; indeed, since the problem is stochastic, the optimal value is lower than for the deterministic version. It has been proven that LRTDP, given an admissible initial heuristic on the value of states, cannot be trapped in loops and eventually yields optimal values [4]. Convergence is established by means of a labeling procedure called CHECKSOLVED(s, ε), which tries to label as solved each traversed state in the current trajectory. When the initial state is labelled as solved, the algorithm has converged. In this section, a bounded version of RTDP (BOUNDED-RTDP) is presented in Algorithm 2 to prune the action space of sub-optimal actions.
This pruning speeds up the convergence of LRTDP. BOUNDED-RTDP is similar to RTDP, except that there are two distinct initial heuristics for unvisited states s ∈ S: h_L(s) and h_U(s). Also, the CHECKSOLVED(s, ε) procedure can be omitted, because the bounds provide the labeling of a state as solved. On the one hand, h_L(s) defines a lower bound on the value of s, such that the optimal value of s is higher than h_L(s). For its part, h_U(s) defines an upper bound on the value of s, such that the optimal value of s is lower than h_U(s). The values of the bounds are computed in Lines 3 and 4 of the BOUNDED-BACKUP function. The two Q-values are computed simultaneously, since the state transitions are the same for both; only the values of the successor states change. Thus, having to compute two Q-values instead of one does not increase the complexity of the approach. In fact, Smith and Simmons [11] state that the additional time to compute a Bellman backup for two bounds, instead of one, is no more than 10%, which is also what we obtained. In particular, L(s) is the lower bound of state s, while U(s) is the upper bound of state s. Similarly, Q_L(a, s) is the Q-value of the lower bound of action a in state s, while Q_U(a, s) is the Q-value of the upper bound of action a in state s. Using these two bounds allows significantly reducing the action space A. Indeed, in Lines 5 and 6 of the BOUNDED-BACKUP function, if Q_U(a, s) ≤ L(s), then action a may be pruned from the action space of s. In Line 13 of this function, a state can be labeled as solved if the difference between the lower and upper bounds is lower than ε. When the execution goes back to the BOUNDED-RTDP function, the next state chosen in Line 10 has a fixed number of available consumable resources Res_c, determined in Line 9. In brief, PICKNEXTSTATE(Res_c) selects a non-solved state s, reachable under the current policy, which has the highest Bellman error (|U(s) − L(s)|). Finally, in Lines 12 to 15, once a trajectory has been completed, a backup is made in a backward fashion on all visited states of that trajectory. This strategy has been proven efficient [11] [6]. As discussed by Singh and Cohn [10], this type of algorithm has a number of desirable anytime characteristics: if an action has to be picked in state s before the algorithm has converged (while multiple competitive actions remain), the action with the highest lower bound is picked. Since the upper bound for state s is known, one may estimate how far the lower bound is from the optimal. If the difference between the lower and upper bound is too high, one can choose to use another greedy algorithm of one's choice, which outputs a fast and near-optimal solution. Furthermore, if a new task dynamically arrives in the environment, it can be accommodated by redefining the lower and upper bounds which exist at the time of its arrival.

Algorithm 2: The BOUNDED-RTDP algorithm. Adapted from [4] and [10].
1: Function BOUNDED-RTDP(S)
2:   returns a value function V
3:   repeat
4:     s ← s_0
5:     visited ← null
6:     repeat
7:       visited.push(s)
8:       BOUNDED-BACKUP(s)
9:       Res_c ← Res_c \ {π(s)}
10:      s ← s.PICKNEXTSTATE(Res_c)
11:    until s is a goal
12:    while visited ≠ null do
13:      s ← visited.pop()
14:      BOUNDED-BACKUP(s)
15:    end while
16:  until s_0 is solved
17:  return V

1: Function BOUNDED-BACKUP(s)
2:   for all a ∈ A(s) do
3:     Q_L(a, s) ← R(s) + γ Σ_{s′∈S} P_a(s′|s) L(s′)
4:     Q_U(a, s) ← R(s) + γ Σ_{s′∈S} P_a(s′|s) U(s′)
5:     if Q_U(a, s) ≤ L(s) then
6:       A(s) ← A(s) \ res(a)
7:     end if
8:   end for
9:   L(s) ← max_{a∈A(s)} Q_L(a, s)
10:  U(s) ← max_{a∈A(s)} Q_U(a, s)
11:  π(s) ← argmax_{a∈A(s)} Q_L(a, s)
12:  if |U(s) − L(s)| < ε then
13:    s ← solved
14:  end if
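As an illustration of the control flow of Algorithm 2, here is a compact Python sketch of a bounded trial with action pruning. The mdp helper interface (actions, transitions, reward, pick_next_state, prune, and the admissible heuristics h_L and h_U) is a hypothetical assumption, and the consumable-resource bookkeeping of Lines 9-10 is omitted for brevity; this is a sketch of the technique, not the authors' implementation.

    def bounded_rtdp_trial(s0, mdp, L, U, policy, solved):
        """One BOUNDED-RTDP trial: simulate greedily, back up both bounds
        on the way down, then sweep the trajectory backwards (Lines 12-15)."""
        visited, s = [], s0
        while not mdp.is_goal(s):
            visited.append(s)
            bounded_backup(s, mdp, L, U, policy, solved)
            s = mdp.pick_next_state(s, policy)   # highest |U - L| successor
        for s in reversed(visited):              # backward sweep
            bounded_backup(s, mdp, L, U, policy, solved)

    def bounded_backup(s, mdp, L, U, policy, solved, eps=1e-3):
        """Bellman backup on both bounds; prunes actions with Q_U(a,s) <= L(s)."""
        lb = lambda t: L.get(t, mdp.h_L(t))      # admissible heuristics for
        ub = lambda t: U.get(t, mdp.h_U(t))      # states not yet visited
        q_lo, q_up = {}, {}
        for a in list(mdp.actions(s)):
            q_lo[a] = mdp.reward(s) + mdp.gamma * sum(
                p * lb(t) for t, p in mdp.transitions(s, a))
            q_up[a] = mdp.reward(s) + mdp.gamma * sum(
                p * ub(t) for t, p in mdp.transitions(s, a))
            if q_up[a] <= lb(s) and len(q_lo) > 1:   # provably sub-optimal action
                mdp.prune(s, a)                      # (keep at least one action)
                del q_lo[a], q_up[a]
        L[s], U[s] = max(q_lo.values()), max(q_up.values())
        policy[s] = max(q_lo, key=q_lo.get)
        if U[s] - L[s] < eps:
            solved.add(s)                            # Line 13: label as solved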
Singh and Cohn [10] proved that an algorithm that uses admissible lower and upper bounds to prune the action space is assured of converging to an optimal solution. The next sections describe two separate methods to define h_L(s) and h_U(s). First, the method of Singh and Cohn [10] is briefly described. Then, our own method proposes tighter bounds, thus allowing a more effective pruning of the action space.

2.4 Singh and Cohn's Bounds

Singh and Cohn [10] defined lower and upper bounds to prune the action space. Their approach is quite straightforward. First, a value function is computed for every task to realize, using a standard RTDP approach. Then, using these task value functions, a lower bound h_L and an upper bound h_U can be defined:

h_L(s) = max_{ta∈Ta} V_ta(s_ta), and h_U(s) = Σ_{ta∈Ta} V_ta(s_ta).

For readability, the upper bound by Singh and Cohn is named SINGHU, and the lower bound is named SINGHL. The admissibility of these bounds has been proven by Singh and Cohn: the upper bound always overestimates the optimal value of each state, while the lower bound always underestimates it. To determine the optimal policy π, Singh and Cohn implemented an algorithm very similar to BOUNDED-RTDP, which uses the bounds to initialize L(s) and U(s). The only difference between BOUNDED-RTDP and the RTDP version of Singh and Cohn is the stopping criterion. Singh and Cohn proposed that the algorithm terminate when only one competitive action remains for each state, or when the range of all competitive actions for any state is bounded by an indifference parameter ε. BOUNDED-RTDP labels states for which |U(s) − L(s)| < ε as solved, and convergence is reached when s_0 is solved or when only one competitive action remains for each state. This stopping criterion is more effective, since it is similar to the one used by Smith and Simmons [11] and by McMahan et al.'s BRTDP [6]. In this paper, the bounds defined by Singh and Cohn, implemented using BOUNDED-RTDP, define the SINGH-RTDP approach. The next sections propose tightening the SINGH-RTDP bounds to permit a more effective pruning of the action space.

2.5 Reducing the Upper Bound

SINGHU includes actions which may not be possible to execute because of resource constraints, which overestimates the upper bound. To consider only possible actions, our upper bound, named MAXU, is introduced:

h_U(s) = max_{a∈A(s)} Σ_{ta∈Ta} Q_ta(a_ta, s_ta)    (4)

where Q_ta(a_ta, s_ta) is the Q-value of task ta for state s_ta and action a_ta, computed using a standard LRTDP approach.

THEOREM 2.1. The upper bound defined by Equation 4 is admissible.

Proof: The local resource constraints are satisfied because the upper bound is computed using all globally possible actions a. However, h_U(s) still overestimates V*(s) because the global resource constraint is not enforced. Indeed, each task may use all consumable resources for its own purpose. Doing this produces a higher value for each task than the one obtained when planning for all tasks globally with the shared limited resources. ■

Computing the MAXU bound in a state has a complexity of O(|A| × |Ta|), against O(|Ta|) for SINGHU. A standard Bellman backup has a complexity of O(|A| × |S|). Since |A| × |Ta| ≪ |A| × |S|, the computation time to determine the upper bound of a state, which is done once for each visited state, is much less than the computation time required to compute a standard Bellman backup for a state, which is usually done many times for each visited state. Thus, the computation time of the upper bound is negligible.
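For illustration, the three bounds can be computed directly from the per-task value and Q-value tables produced by LRTDP. In this sketch, task_state and sub_allocation are hypothetical accessors extracting s_ta from the global state and a_ta from a global action; none of this is the authors' code.

    def singh_lower(s, tasks, V):
        """SINGHL: maximum over tasks of the task value function."""
        return max(V[ta][s.task_state(ta)] for ta in tasks)

    def singh_upper(s, tasks, V):
        """SINGHU: sum of task value functions (ignores action feasibility)."""
        return sum(V[ta][s.task_state(ta)] for ta in tasks)

    def max_upper(s, tasks, Q, actions):
        """MAXU (Equation 4): restrict to globally feasible actions a in A(s),
        summing the per-task Q-values of the induced sub-allocations a_ta."""
        return max(sum(Q[ta][(s.task_state(ta), a.sub_allocation(ta))]
                       for ta in tasks)
                   for a in actions(s))

The contrast is visible in max_upper: it only scores joint actions that respect the local constraints, which is why it never exceeds singh_upper.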
2.6 Increasing the Lower Bound

The idea for increasing SINGHL is to allocate the resources a priori among the tasks. When each task has its own set of resources, each task may be solved independently. The lower bound of state s is h_L(s) = Σ_{ta∈Ta} Low_ta(s_ta), where Low_ta(s_ta) is a value function for each task ta ∈ Ta, computed as if the resources had been allocated a priori. The a priori allocation of the resources is made using marginal revenue, a widely used concept in microeconomics [7] which has recently been used for coordination of a decentralized MDP [2]. In brief, marginal revenue is the extra revenue that an additional unit of product will bring to a firm. Thus, for a stochastic resource allocation problem, the marginal revenue of a resource is the additional expected value it brings. The marginal revenue of a consumable resource res for a task ta in a state s_ta is defined as follows (matching Line 12 of Algorithm 4):

mr_ta(s_ta) = V_ta(s_ta) − V_ta(s_ta(Res \ res))    (5)

The concept of marginal revenue of a resource is used in Algorithm 4 to allocate the resources a priori among the tasks, which enables defining the lower-bound value of a state. In Line 4 of the algorithm, a value function is computed for all tasks in the environment using a standard LRTDP [4] approach. These value functions, which are also used for the upper bound, are computed considering that each task may use all available resources. Line 5 initializes the value_ta variable, which is the estimated value of each task ta ∈ Ta. At the beginning of the algorithm, no resources are allocated to a specific task, so value_ta is initialized to 0 for all ta ∈ Ta. Then, in Line 9, a resource type res (consumable or non-consumable) is selected to be allocated. Here, a domain expert may separate all available resources into several types or parts to be allocated. The resources are allocated in the order of their specialization: the more a resource is efficient on a small group of tasks, the earlier it is allocated. Allocating the resources in this order improves the quality of the resulting lower bound. Line 12 computes the marginal revenue of a consumable resource res for each task ta ∈ Ta. For a non-consumable resource, since the resource is not considered in the state space, all other reachable states from s_ta consider that the resource res is still usable; the approach here is to sum, for all states in a trajectory given by the policy of task ta, the difference between the real value of a state and the maximal Q-value of this state if resource res cannot be used. This heuristic proved to obtain good results, but others may be tried, for example Monte Carlo simulation. In Line 21, the marginal revenue is updated as a function of the resources already allocated to each task. R(s_g_ta) is the reward for realizing task ta. Thus, (V_ta(s_ta) − value_ta) / R(s_g_ta) is the residual expected value that remains to be achieved, knowing the current allocation to task ta, normalized by the reward of realizing the task. The marginal revenue is multiplied by this term to indicate that the higher a task's residual value, the higher its marginal revenue. Then, a task ta with the highest marginal revenue, adjusted with residual value, is selected in Line 23. In Line 24, the resource type res is allocated to the group of resources Res_ta of task ta. Afterwards, Line 29 recomputes value_ta. The first part of the equation to compute value_ta represents the expected residual value for task ta; the second part is the ratio of the efficiency of resource type res. In other words, value_ta is assigned value_ta + (the residual value × the value ratio of resource type res). For a consumable resource, the Q-value considers only resource res in the state space, while for a non-consumable resource, no resources are available. All resource types are allocated in this manner until Res is empty. All consumable and non-consumable resource types are allocated to each task. When all resources are allocated, the lower-bound components Low_ta of each task are computed in Line 32.

Algorithm 4: The marginal revenue lower bound algorithm.
1: Function REVENUE-BOUND(S)
2:   returns a lower bound Low_Ta
3:   for all ta ∈ Ta do
4:     V_ta ← LRTDP(S_ta)
5:     value_ta ← 0
6:   end for
7:   s ← s_0
8:   repeat
9:     res ← Select a resource type res ∈ Res
10:    for all ta ∈ Ta do
11:      if res is consumable then
12:        mr_ta(s_ta) ← V_ta(s_ta) − V_ta(s_ta(Res \ res))
⋮
30:  until all resource types res ∈ Res are assigned
31:  for all ta ∈ Ta do
32:    Low_ta ← LRTDP(S_ta, Res_ta)
33:  end for
34:  return Low_Ta

When the global solution is computed, the lower bound is as follows:

h_L(s) = max( max_{ta∈Ta} V_ta(s_ta), Σ_{ta∈Ta} Low_ta(s_ta) )    (6)

We use the maximum of the SINGHL bound and the sum of the lower-bound components Low_ta, so MARGINAL-REVENUE ≥ SINGHL. In particular, the SINGHL bound may be higher when a small number of tasks remain. Since the components Low_ta are computed considering s_0, if, for example, only one task remains in a subsequent state, the SINGHL bound will be higher than any of the Low_ta components. The main difference in complexity between SINGHL and REVENUE-BOUND is in Line 32, where a value for each task has to be computed with the shared resources. However, since the resources are shared, the state space and action space are greatly reduced for each task, which greatly reduces the computation compared to the value functions computed in Line 4, a computation performed for both SINGHL and REVENUE-BOUND.

THEOREM 2.2. The lower bound of Equation 6 is admissible.

Proof: Low_ta(s_ta) is computed with the resources being shared. Summing the Low_ta(s_ta) value functions for each ta ∈ Ta does not violate the local and global resource constraints: as the resources are shared, the tasks cannot overuse them. Thus, h_L(s) is a realizable policy, and an admissible lower bound. ■
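To illustrate the shape of the a priori allocation loop in Algorithm 4, here is a heavily simplified Python sketch. The residual-value weighting follows the verbal description of Lines 21 and 29 above; solve_task (standing in for LRTDP), marginal_revenue, value_ratio, and reward are hypothetical placeholders, and the exact Line 29 update in the paper is not fully reproduced here.

    def revenue_bound(tasks, resources, s0, solve_task, marginal_revenue,
                      value_ratio, reward):
        """Simplified sketch of Algorithm 4: split resources a priori by
        residual-value-weighted marginal revenue, then solve each task on
        its own share to obtain the lower-bound components Low_ta."""
        V = {ta: solve_task(ta, resources) for ta in tasks}    # Line 4
        value = {ta: 0.0 for ta in tasks}                      # Line 5
        share = {ta: [] for ta in tasks}
        for res in resources:                                  # most specialized first (Line 9)
            def adjusted(ta):                                  # Line 21: mr weighted by residual value
                residual = (V[ta](s0) - value[ta]) / reward(ta)
                return marginal_revenue(ta, res, s0) * residual
            ta = max(tasks, key=adjusted)                      # Line 23
            share[ta].append(res)                              # Line 24
            value[ta] += (V[ta](s0) - value[ta]) * value_ratio(ta, res)  # Line 29 analogue
        return {ta: solve_task(ta, share[ta]) for ta in tasks}  # Line 32: Low_ta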
3. DISCUSSION AND EXPERIMENTS

The domain of the experiments is a naval platform which must counter incoming missiles (i.e., tasks) by using its resources (i.e., weapons, movements). For the experiments, 100 random resource allocation problems were generated for each approach and each possible number of tasks. In our problem, |S_ta| = 4, so each task can be in four distinct states. There are two types of states: first, states where actions modify the transition probabilities, and then goal states. The state transitions are all stochastic, because when a missile is in a given state, it may always transit to many possible states. In particular, each resource type has a probability of countering a missile between 45% and 65%, depending on the state of the task. When a missile is not countered, it transits to another state, which may or may not be preferred to the current state; the most preferred state for a task is the one where the missile is countered.
The effectiveness of each resource is modified randomly by ±15% at the start of a scenario. There are also local and global resource constraints on the amounts that may be used. For the local constraints, at most one resource of each type can be allocated to execute tasks in a specific state. This constraint is also present on a real naval platform, because of sensor and launcher constraints and engagement policies. Furthermore, for consumable resources, the total amount available is between 1 and 2 for each type. The global constraint is generated randomly at the start of a scenario for each consumable resource type. The number of resource types has been fixed at 5: 3 consumable resource types and 2 non-consumable resource types.

For this problem, a standard LRTDP approach has been implemented. A simple heuristic has been used, where the value of an unvisited state is assigned the value of a goal state in which all tasks are achieved. This way, the value of each unvisited state is assured to overestimate its real value, since the value of achieving a task ta is the highest the planner may get for ta. Since this heuristic is quite straightforward, the advantages of using better heuristics would be even more evident. Nevertheless, even though the LRTDP approach uses a simple heuristic, a huge part of the state space is still not visited when computing the optimal policy.

The approaches described in this paper are compared in Figures 1 and 2. Let us summarize these approaches:
• QDEC-LRTDP: The backups are computed using the QDEC-BACKUP function (Algorithm 1), but in an LRTDP context. In particular, the updates made in the CHECKSOLVED function are also made using the QDEC-BACKUP function.
• LRTDP-UP: The MAXU upper bound is used for LRTDP.
• SINGH-RTDP: The SINGHL and SINGHU bounds are used for BOUNDED-RTDP.
• MR-RTDP: The REVENUE-BOUND and MAXU bounds are used for BOUNDED-RTDP.

To implement QDEC-LRTDP, we divided the set of tasks into two equal parts. The set of tasks Ta_i, managed by agent i, can be accomplished with the set of resources Res_i, while the second set of tasks Ta_i′, managed by agent i′, can be accomplished with the set of resources Res_i′. Res_i had one consumable resource type and one non-consumable resource type, while Res_i′ had two consumable resource types and one non-consumable resource type. When the number of tasks is odd, one more task is assigned to Ta_i′. There are constraints between the resource groups Res_i and Res_i′, such that some assignments are not possible; these constraints are managed by the arbitrator, as described in Section 2.2. Q-decomposition diminishes the planning time significantly in our problem settings, and seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least one other agent.

To compute the lower bound of REVENUE-BOUND, all available resources have to be separated into several types or parts to be allocated. For our problem, we allocated each resource of each type in the order of its specialization, as described for the REVENUE-BOUND function. In terms of experiments, notice that the LRTDP and LRTDP-UP approaches for resource allocation, which do not prune the action space, are much more complex. For instance, it took an average of 1512 seconds to plan with the LRTDP-UP approach for six tasks (see Figure 1).
The SINGH-RTDP approach diminished the planning time by using lower and upper bounds to prune the action space. MR-RTDP further reduced the planning time by providing very tight initial bounds. In particular, SINGH-RTDP needed 231 seconds on average to solve problems with six tasks, while MR-RTDP required 76 seconds. This time reduction is quite significant compared to LRTDP-UP, which demonstrates the efficiency of using bounds to prune the action space. Furthermore, we implemented MR-RTDP with the SINGHU bound, and this was slightly less efficient than with the MAXU bound. We also implemented MR-RTDP with the SINGHL bound, and this was slightly more efficient than SINGH-RTDP. From these results, we conclude that the difference in efficiency between MR-RTDP and SINGH-RTDP is more attributable to the MARGINAL-REVENUE lower bound than to the MAXU upper bound. Indeed, when the number of tasks to execute is high, the lower bound of SINGH-RTDP takes the value of a single task, whereas the lower bound of MR-RTDP takes into account the values of all tasks by using a heuristic to distribute the resources. Indeed, an optimal allocation is one where the resources are distributed in the best way to all tasks, and our lower bound heuristically does that.

[Figure 1: Efficiency of Q-decomposition LRTDP and LRTDP.]

4. CONCLUSION

The experiments have shown that Q-decomposition seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least one other agent. On the other hand, when the available resources are shared, no Q-decomposition is possible, and we proposed tight bounds for heuristic search. In this case, the planning time of BOUNDED-RTDP, which prunes the action space, is significantly lower than for LRTDP. Furthermore, the marginal revenue bound proposed in this paper compares favorably to the Singh and Cohn [10] approach. BOUNDED-RTDP with our proposed bounds may apply to a wide range of stochastic environments; the only condition on their use is that each task possesses consumable and/or non-consumable limited resources. An interesting research avenue would be to experiment with our bounds in other heuristic search algorithms. For instance, FRTDP [11] and BRTDP [6] are both efficient heuristic search algorithms; in particular, both propose efficient state trajectory updates when given upper and lower bounds. Our tight bounds would enable both FRTDP and BRTDP to reduce the number of backups to perform before convergence. Finally, the BOUNDED-RTDP function prunes the action space when Q_U(a, s) ≤ L(s), as Singh and Cohn [10] suggested. FRTDP and BRTDP could also prune the action space in these circumstances to further reduce their planning time.
A Q-decomposition and Bounded RTDP Approach to Resource Allocation

ABSTRACT

This paper contributes to solving stochastic resource allocation problems, known to be NP-Complete, effectively. To address this complex resource management problem, a Q-decomposition approach is proposed for the case where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. Q-decomposition makes it possible to coordinate these reward-separated agents and thus reduces the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Q-decomposition is possible and we use heuristic search. In particular, bounded Real-Time Dynamic Programming (bounded RTDP) is used. Bounded RTDP concentrates the planning on significant states only and prunes the action space. The pruning is accomplished by proposing tight upper and lower bounds on the value function.

1. INTRODUCTION

This paper aims to contribute to solving complex stochastic resource allocation problems. In general, resource allocation problems are known to be NP-Complete [12]. In such problems, a scheduling process suggests the action (i.e., resources to allocate) to undertake to accomplish certain tasks, according to the perfectly observable state of the environment. When executing an action to realize a set of tasks, the stochastic nature of the problem induces probabilities on the next visited state. In general, the number of states is the combination of all possible specific states of each task and available resources. In this case, the number of possible actions in a state is the combination of each individual possible resource assignment to the tasks. The very high number of states and actions in this type of problem makes it very complex.

There can be many types of resource allocation problems. First, if the resources are already shared among the agents, and the actions made by an agent do not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is one where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. To solve this problem efficiently, we adapt Q-decomposition, proposed by Russell and Zimdars [9]. In our Q-decomposition approach, a planning agent manages each task and all agents have to share the limited resources. The planning process starts with the initial state s_0, in which each agent computes its respective Q-values. Then, the planning agents are coordinated through an arbitrator to find the highest global Q-value by adding the respective possible Q-values of each agent. When implemented with heuristic search, since the number of states and actions to consider when computing the optimal policy is exponentially reduced compared to other known approaches, Q-decomposition allows formulating the first optimal decomposed heuristic search algorithm in a stochastic environment.

On the other hand, when the resources are available to all agents, no Q-decomposition is possible. A common way of addressing this large stochastic problem is by using Markov Decision Processes (MDPs), and in particular real-time search, for which many algorithms have been developed recently.
For instance, Real-Time Dynamic Programming (RTDP) [1], LRTDP [4], HDP [3], and LAO* [5] are all state-of-the-art heuristic search approaches for stochastic environments. Because of its anytime quality, an interesting approach is RTDP, introduced by Barto et al. [1], which updates states in trajectories from an initial state s_0 to a goal state s_g. RTDP is used in this paper to solve a constrained resource allocation problem efficiently. RTDP is much more effective if the action space can be pruned of sub-optimal actions. To do this, McMahan et al. [6], Smith and Simmons [11], and Singh and Cohn [10] proposed solving a stochastic problem using an RTDP-type heuristic search with upper and lower bounds on the value of states. McMahan et al. [6] and Smith and Simmons [11] suggested, in particular, an efficient trajectory of state updates to further speed up convergence, when given upper and lower bounds. This efficient trajectory of state updates can be combined with the approach proposed here, since this paper focuses on the definition of tight bounds and on efficient state updates for a constrained resource allocation problem. On the other hand, the approach by Singh and Cohn is suitable to our case, and is extended in this paper using, in particular, the concept of marginal revenue [7] to elaborate tight bounds. This paper proposes new algorithms to define upper and lower bounds in the context of an RTDP heuristic search approach. Our marginal revenue bounds are compared theoretically and empirically to the bounds proposed by Singh and Cohn. Also, even if the algorithm used here to obtain the optimal policy is RTDP, our bounds can be used with any other algorithm to solve an MDP. The only condition on the use of our bounds is to be in the context of stochastic constrained resource allocation. The problem is now modelled.
I-63
Combinatorial Resource Scheduling for Multiagent MDPs
Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic job-scheduling domain.
[ "combinatori resourc schedul", "resourc", "schedul", "optim resourc schedul", "multiag system", "markov decis process", "resourc alloc", "optim problem", "util function", "optim alloc", "discret-time schedul problem", "resourc-schedul algorithm", "resourc-schedul", "task and resourc alloc in agent system", "multiag plan" ]
[ "P", "P", "P", "P", "P", "P", "M", "R", "M", "M", "M", "M", "U", "M", "M" ]
Combinatorial Resource Scheduling for Multiagent MDPs

Dmitri A. Dolgov, Michael R. James, and Michael E. Samples
AI and Robotics Group, Technical Research, Toyota Technical Center, USA
{ddolgov, michael.r.james, michael.samples}@gmail.com

ABSTRACT

Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems, where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic job-scheduling domain.

Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Multiagent systems

General Terms
Algorithms, Performance, Design

1. INTRODUCTION

The tasks of optimal resource allocation and scheduling are ubiquitous in multiagent systems, but solving such optimization problems can be computationally difficult, due to a number of factors. In particular, when the value of a set of resources to an agent is not additive (as is often the case with resources that are substitutes or complements), the utility function might have to be defined on an exponentially large space of resource bundles, which very quickly becomes computationally intractable. Further, even when each agent has a utility function that is nonzero only on a small subset of the possible resource bundles, obtaining an optimal allocation is still computationally prohibitive, as the problem becomes NP-complete [14]. Such computational issues have recently spawned several threads of work in using compact models of agents' preferences. One idea is to use any structure present in utility functions to represent them compactly, via, for example, logical formulas [15, 10, 4, 3]. An alternative is to directly model the mechanisms that define the agents' utility functions and perform resource allocation directly with these models [9]. A way of accomplishing this is to model the processes by which an agent might utilize the resources and define the utility function as the payoff of these processes. In particular, if an agent uses resources to act in a stochastic environment, its utility function can be naturally modeled with a Markov decision process, whose action set is parameterized by the available resources. This representation can then be used to construct very efficient resource-allocation algorithms that lead to an exponential speedup over a straightforward optimization problem with flat representations of combinatorial preferences [6, 7, 8].
However, this existing work on resource allocation with preferences induced by resource-parameterized MDPs makes the assumption that the resources are only allocated once and are then utilized by the agents independently within their infinite-horizon MDPs. This assumption that no reallocation of resources is possible can be limiting in domains where agents arrive and depart dynamically. In this paper, we extend the work on resource allocation under MDP-induced preferences to discrete-time scheduling problems, where agents are present in the system for finite time intervals and can only use resources within these intervals. In particular, agents arrive and depart at arbitrary (predefined) times and within these intervals use resources to execute tasks in finite-horizon MDPs. We address the problem of globally optimal resource scheduling, where the objective is to find an allocation of resources to the agents across time that maximizes the sum of the expected rewards that they obtain.

In this context, our main contribution is a mixed-integer-programming formulation of the scheduling problem that chooses globally optimal resource assignments, starting times, and execution horizons for all agents (within their arrival-departure intervals). We analyze and empirically compare two flavors of the scheduling problem: one where agents have static resource assignments within their finite-horizon MDPs, and another where resources can be dynamically reallocated between agents at every time step. In the rest of the paper, we first lay down the necessary groundwork in Section 2 and then introduce our model and formal problem statement in Section 3. In Section 4.2, we describe our main result, the optimization program for globally optimal resource scheduling. Following the discussion of our experimental results on a job-scheduling problem in Section 5, we conclude in Section 6 with a discussion of possible extensions and generalizations of our method.

2. BACKGROUND

Similarly to the model used in previous work on resource allocation with MDP-induced preferences [6, 7], we define the value of a set of resources to an agent as the value of the best MDP policy that is realizable given those resources. However, since the focus of our work is on scheduling problems, and a large part of the optimization problem is to decide how resources are allocated in time among agents with finite arrival and departure times, we model the agents' planning problems as finite-horizon MDPs, in contrast to previous work that used infinite-horizon discounted MDPs. In the rest of this section, we first introduce some necessary background on finite-horizon MDPs and present a linear-programming formulation that serves as the basis for our solution algorithm developed in Section 4. We also outline the standard methods for combinatorial resource scheduling with flat resource values, which serve as a comparison benchmark for the new model developed here.

2.1 Markov Decision Processes

A stationary, finite-domain, discrete-time MDP (see, for example, [13] for a thorough and detailed development) can be described as a tuple ⟨S, A, p, r⟩, where: S is a finite set of system states; A is a finite set of actions that are available to the agent; p is a stationary stochastic transition function, where p(σ|s, a) is the probability of transitioning to state σ upon executing action a in state s; r is a stationary reward function, where r(s, a) specifies the reward obtained upon executing action a in state s.
Given such an MDP, a decision problem under a finite horizon T is to choose an optimal action at every time step to maximize the expected value of the total reward accrued during the agent's (finite) lifetime. The agent's optimal policy is then a function of the current state s and the time until the horizon. An optimal policy for such a problem is to act greedily with respect to the optimal value function, defined recursively by the following system of finite-time Bellman equations [2]:

v(s, t) = max_a [ r(s, a) + Σ_σ p(σ|s, a) v(σ, t + 1) ], ∀s ∈ S, t ∈ [1, T − 1];
v(s, T) = 0, ∀s ∈ S;

where v(s, t) is the optimal value of being in state s at time t ∈ [1, T]. This optimal value function can be easily computed using dynamic programming, leading to the following optimal policy π, where π(s, a, t) is the probability of executing action a in state s at time t:

π(s, a, t) = 1 if a = argmax_{a′} [ r(s, a′) + Σ_σ p(σ|s, a′) v(σ, t + 1) ], and 0 otherwise.

The above is the most common way of computing the optimal value function (and therefore an optimal policy) for a finite-horizon MDP. However, we can also formulate the problem as the following linear program (similarly to the dual LP for infinite-horizon discounted MDPs [13, 6, 7]):

max Σ_s Σ_a r(s, a) Σ_t x(s, a, t)
subject to:
Σ_a x(σ, a, t + 1) = Σ_{s,a} p(σ|s, a) x(s, a, t), ∀σ, t ∈ [1, T − 1];
Σ_a x(s, a, 1) = α(s), ∀s ∈ S;    (1)

where α(s) is the initial distribution over the state space, and x is the (non-stationary) occupation measure (x(s, a, t) ∈ [0, 1] is the total expected number of times action a is executed in state s at time t). An optimal (non-stationary) policy is obtained from the occupation measure as follows:

π(s, a, t) = x(s, a, t) / Σ_{a′} x(s, a′, t), ∀s ∈ S, t ∈ [1, T].    (2)

Note that the standard unconstrained finite-horizon MDP, as described above, always has a uniformly-optimal solution (optimal for any initial distribution α(s)). Therefore, an optimal policy can be obtained by using an arbitrary constant α(s) > 0 (in particular, α(s) = 1 will result in x(s, a, t) = π(s, a, t)). However, for MDPs with resource constraints (as defined below in Section 3), uniformly-optimal policies do not in general exist. In such cases, α becomes a part of the problem input, and a resulting policy is only optimal for that particular α. This result is well known for infinite-horizon MDPs with various types of constraints [1, 6], and it also holds for our finite-horizon model, which can be easily established via a line of reasoning completely analogous to the arguments in [6].
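For concreteness, here is a minimal backward-induction sketch of the finite-horizon recursion above (the LP formulation yields the same values); the tabular dictionary encoding of the MDP is an illustrative assumption, not the authors' implementation.

    def backward_induction(S, A, p, r, T):
        """Finite-horizon dynamic programming for the Bellman system above.

        p[(s, a)] : list of (next_state, probability) pairs
        r[(s, a)] : immediate reward
        Returns v[(s, t)] and a deterministic greedy policy pi[(s, t)].
        """
        v, pi = {}, {}
        for s in S:
            v[(s, T)] = 0.0                     # terminal values, as in the text
        for t in range(T - 1, 0, -1):           # t = T-1, ..., 1
            for s in S:
                def q(a):
                    return r[(s, a)] + sum(prob * v[(sigma, t + 1)]
                                           for sigma, prob in p[(s, a)])
                best = max(A, key=q)
                v[(s, t)], pi[(s, t)] = q(best), best
        return v, pi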
Let bτ be the global time horizon for the problem, before which all of the agents'' MDPs must finish. We assume τd m < bτ, ∀m ∈ M. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1221 For the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from [1, Tm]) that they may use. Let Ω be the set of resources to be allocated among the agents. An agent will get at most one resource bundle for one of the time horizons. Let the variable ψ ∈ Ψm enumerate all possible pairs of resource bundles and time horizons for agent m, so there are 2|Ω| × Tm values for ψ (the space of bundles is exponential in the number of resource types |Ω|). The agent m must provide a value vψ m for each ψ, and the coordinator will allocate at most one ψ (resource, time horizon) pair to each agent. This allocation is expressed as an indicator variable zψ m ∈ {0, 1} that shows whether ψ is assigned to agent m. For time τ and resource ω, the function nm(ψ, τ, ω) ∈ {0, 1} indicates whether the bundle in ψ uses resource ω at time τ (we make the assumption that agents have binary resource requirements). This allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of ψ. The problem of finding an optimal allocation that satisfies the global constraint that the amount of each resource ω allocated to all agents does not exceed the available amount bϕ(ω) can be expressed as the following integer program: max X m∈M X ψ∈Ψm zψ mvψ m subject to: X ψ∈Ψm zψ m ≤ 1, ∀m ∈ M; X m∈M X ψ∈Ψm zψ mnm(ψ, τ, ω) ≤ bϕ(ω), ∀τ ∈ [1, bτ], ∀ω ∈ Ω; (3) The first constraint in equation 3 says that no agent can receive more than one bundle, and the second constraint ensures that the total assignment of resource ω does not, at any time, exceed the resource bound. For the scheduling problem where the agents are able to dynamically reallocate resources, each agent must specify a value for every combination of bundles and time steps within its time horizon. Let the variable ψ ∈ Ψm in this case enumerate all possible resource bundles for which at most one bundle may be assigned to agent m at each time step. Therefore, in this case there are P t∈[1,Tm](2|Ω| )t ∼ 2|Ω|Tm possibilities of resource bundles assigned to different time slots, for the Tm different time horizons. The same set of equations (3) can be used to solve this dynamic scheduling problem, but the integer program is different because of the difference in how ψ is defined. In this case, the number of ψ values is exponential in each agent``s planning horizon Tm, resulting in a much larger program. This straightforward approach to solving both of these scheduling problems requires an enumeration and solution of either 2|Ω| Tm (static allocation) or P t∈[1,Tm] 2|Ω|t (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable with the growth of the number of resources |Ω| or the time horizon Tm. 3. MODEL AND PROBLEM STATEMENT We now formally introduce our model of the resourcescheduling problem. The problem input consists of the following components: • M, Ω, bϕ, τa m, τd m, bτ are as defined above in Section 2.2. • {Θm} = {S, A, pm, rm, αm} are the MDPs of all agents m ∈ M. 
3. MODEL AND PROBLEM STATEMENT

We now formally introduce our model of the resource-scheduling problem. The problem input consists of the following components:

• M, Ω, φ̂, τ^a_m, τ^d_m, τ̂ are as defined above in Section 2.2.
• {Θ_m} = {⟨S, A, p_m, r_m, α_m⟩} are the MDPs of all agents m ∈ M. Without loss of generality, we assume that the state and action spaces of all agents are the same, but each has its own transition function p_m, reward function r_m, and initial conditions α_m.
• ϕ_m : A × Ω → {0, 1} is the mapping of actions to resources for agent m. ϕ_m(a, ω) indicates whether action a of agent m needs resource ω. An agent m that receives a set of resources that does not include resource ω cannot execute in its MDP policy any action a for which ϕ_m(a, ω) = 1. We assume all resource requirements are binary; as discussed below in Section 6, this assumption is not limiting.

Given the above input, the optimization problem we consider is to find the globally optimal (sum-of-expected-rewards-maximizing) mapping of resources to agents for all time steps: Δ : τ × M × Ω → {0, 1}. A solution is feasible if the corresponding assignment of resources to the agents does not violate the global resource constraint:

Σ_m Δ_m(τ, ω) ≤ φ̂(ω), ∀ω ∈ Ω, τ ∈ [1, τ̂].    (4)

We consider two flavors of the resource-scheduling problem. The first formulation restricts resource assignments to the space where the allocation of resources to each agent is static during the agent's lifetime. The second formulation allows reassignment of resources between agents at every time step within their lifetimes.

Figure 1 depicts a resource-scheduling problem with three agents M = {m1, m2, m3}, three resources Ω = {ω1, ω2, ω3}, and a global problem horizon of τ̂ = 11. The agents' arrival and departure times are shown as gray boxes and are {1, 6}, {3, 7}, and {2, 11}, respectively. A solution to this problem is shown via horizontal bars within each agent's box, where the bars correspond to the allocation of the three resource types. Figure 1a shows a solution to a static scheduling problem. According to the shown solution, agent m1 begins the execution of its MDP at time τ = 1 and has a lock on all three resources until it finishes execution at time τ = 3. Note that agent m1 relinquishes its hold on the resources before its announced departure time of τ^d_{m1} = 6, ostensibly because other agents can utilize the resources more effectively. Thus, at time τ = 4, resources ω1 and ω3 are allocated to agent m2, who then uses them to execute its MDP (using only actions supported by resources ω1 and ω3) until time τ = 7. Agent m3 holds resource ω3 during the interval τ ∈ [4, 10]. Figure 1b shows a possible solution to the dynamic version of the same problem. There, resources can be reallocated between agents at every time step. For example, agent m1 gives up its use of resource ω2 at time τ = 2, although it continues the execution of its MDP until time τ = 6. Notice that an agent is not allowed to stop and restart its MDP, so agent m1 is only able to continue executing in the interval τ ∈ [3, 4] if it has actions that do not require any resources (ϕ_m(a, ω) = 0).

[Figure 1: Illustration of a solution to a resource-scheduling problem with three agents and three resources: (a) static resource assignments (resource assignments are constant within agents' lifetimes); (b) dynamic assignment (resource assignments are allowed to change at every time step).]

Clearly, the model and problem statement described above make a number of assumptions about the problem and the desired solution properties. We discuss some of those assumptions and their implications in Section 6.
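The global feasibility condition (4) is straightforward to check for any candidate assignment Δ; a small sketch follows, where the dictionary encoding of Δ is an illustrative assumption.

    def feasible(delta, cap, agents, resources, horizon):
        """Check constraint (4): at every time step, the total assignment of
        each resource across agents stays within the available amount.

        delta[(m, tau, w)] in {0, 1} : 1 if agent m holds resource w at time tau
        cap[w]                       : available amount of resource w
        """
        return all(
            sum(delta.get((m, tau, w), 0) for m in agents) <= cap[w]
            for tau in range(1, horizon + 1)
            for w in resources)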
4. RESOURCE SCHEDULING

Our resource-scheduling algorithm proceeds in two stages. First, we perform a preprocessing step that augments the agent MDPs; this process is described in Section 4.1. Second, using these augmented MDPs we construct a global optimization problem, which is described in Section 4.2.

4.1 Augmenting Agents' MDPs

In the model described in the previous section, we assume that if an agent does not possess the necessary resources to perform actions in its MDP, its execution is halted and the agent leaves the system. In other words, the MDPs cannot be paused and resumed. For example, in the problem shown in Figure 1a, agent $m_1$ releases all resources after time $\tau = 3$, at which point the execution of its MDP is halted. Similarly, agents $m_2$ and $m_3$ only execute their MDPs in the intervals $\tau \in [4, 6]$ and $\tau \in [4, 10]$, respectively. Therefore, an important part of the global decision-making problem is to decide the window of time during which each of the agents is active (i.e., executing its MDP).

To accomplish this, we augment each agent's MDP with two new states (a start state $s^b$ and a finish state $s^f$) and a new start/stop action $a^*$, as illustrated in Figure 2. The idea is that an agent stays in the start state $s^b$ until it is ready to execute its MDP, at which point it performs the start/stop action $a^*$ and transitions into the state space of the original MDP with the transition probability that corresponds to the original initial distribution $\alpha(s)$. For example, in Figure 1a, for agent $m_2$ this would happen at time $\tau = 4$. Once the agent gets to the end of its activity window (time $\tau = 6$ for agent $m_2$ in Figure 1a), it performs the start/stop action, which takes it into the sink finish state $s^f$ at time $\tau = 7$. More precisely, given an MDP $\langle S, A, p_m, r_m, \alpha_m \rangle$, we define an augmented MDP $\langle S', A', p'_m, r'_m, \alpha'_m \rangle$ as follows:

$$S' = S \cup \{s^b\} \cup \{s^f\}; \qquad A' = A \cup \{a^*\};$$
$$p'(s \mid s^b, a^*) = \alpha(s), \ \forall s \in S; \qquad p'(s^b \mid s^b, a) = 1.0, \ \forall a \in A;$$
$$p'(s^f \mid s, a^*) = 1.0, \ \forall s \in S; \qquad p'(\sigma \mid s, a) = p(\sigma \mid s, a), \ \forall s, \sigma \in S, a \in A;$$
$$r'(s^b, a) = r'(s^f, a) = 0, \ \forall a \in A'; \qquad r'(s, a) = r(s, a), \ \forall s \in S, a \in A;$$
$$\alpha'(s^b) = 1; \qquad \alpha'(s) = 0, \ \forall s \in S;$$

where all non-specified transition probabilities are assumed to be zero. Further, in order to account for the new starting state, we begin the MDP one time step earlier, setting $\tau^a_m \leftarrow \tau^a_m - 1$. This will not affect the resource allocation, because the resource constraints are only enforced for the original MDP states, as will be discussed in the next section. For example, the augmented MDP shown in Figure 2b (which starts in state $s^b$ at time $\tau = 2$) would be constructed from an MDP with original arrival time $\tau = 3$. Figure 2b also shows a sample trajectory through the state space: the agent starts in state $s^b$, transitions into the state space $S$ of the original MDP, and finally exits into the sink state $s^f$.

Note that if we wanted to model a problem where agents could pause their MDPs at arbitrary time steps (which might be useful for domains where dynamic reallocation is possible), we could easily accomplish this by including an extra action that transitions from each state to itself with zero reward.
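The construction transcribes directly into code. In the sketch below (building on the hypothetical containers from Section 3), the sentinel ids `S_B`, `S_F`, and `A_STAR` are ours; the definitions above leave the sink's outgoing transitions and the reward of the stop action in interior states unspecified, and we take both to be zero-reward and absorbing, which matches the intent of a sink state.

```python
S_B, S_F, A_STAR = "s_b", "s_f", "a_star"   # our sentinel ids for s^b, s^f, a*

def augment(S, A, p, r, alpha):
    """Section 4.1 augmentation of an MDP <S, A, p, r, alpha>.
    Conventions: p[(sigma, s, a)] = p(sigma | s, a); r[(s, a)]; alpha[s]."""
    S2, A2 = list(S) + [S_B, S_F], list(A) + [A_STAR]
    p2, r2 = dict(p), dict(r)                     # original dynamics are unchanged
    for s in S:
        p2[(s, S_B, A_STAR)] = alpha.get(s, 0.0)  # start: enter via alpha(s)
        p2[(S_F, s, A_STAR)] = 1.0                # stop: drop into the sink
        r2[(s, A_STAR)] = 0.0                     # assumption: stopping pays nothing
    for a in A:
        p2[(S_B, S_B, a)] = 1.0                   # original actions idle in s^b
    for a in A2:
        p2[(S_F, S_F, a)] = 1.0                   # assumption: s^f is absorbing
        r2[(S_B, a)] = 0.0                        # no reward while inactive
        r2[(S_F, a)] = 0.0
    alpha2 = {s: 0.0 for s in S2}
    alpha2[S_B] = 1.0                             # the MDP now starts in s^b ...
    # ... and one step earlier: the caller should also set arrival -= 1.
    return S2, A2, p2, r2, alpha2
```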
4.2 MILP for Resource Scheduling

Given a set of augmented MDPs, as defined above, the goal of this section is to formulate a global optimization program that solves the resource-scheduling problem. In this section and below, all MDPs are assumed to be the augmented MDPs as defined in Section 4.1.

Our approach is similar to the idea used in [6]: we begin with the linear-program formulation of agents' MDPs (1) and augment it with constraints that ensure that the corresponding resource allocation across agents and time is valid. The resulting optimization problem then simultaneously solves the agents' MDPs and resource-scheduling problems. In the rest of this section, we incrementally develop a mixed integer linear program (MILP) that achieves this.

In the absence of resource constraints, the agents' finite-horizon MDPs are completely independent, and the globally optimal solution can be trivially obtained via the following LP, which is simply an aggregation of single-agent finite-horizon LPs:

$$\max \sum_m \sum_s \sum_a r_m(s, a) \sum_t x_m(s, a, t)$$
subject to:
$$\sum_a x_m(\sigma, a, t + 1) = \sum_{s, a} p_m(\sigma \mid s, a)\, x_m(s, a, t), \ \forall m \in M, \sigma \in S, t \in [1, T_m - 1];$$
$$\sum_a x_m(s, a, 1) = \alpha_m(s), \ \forall m \in M, s \in S; \quad (12)$$

where $x_m(s, a, t)$ is the occupation measure of agent $m$, and $T_m = \tau^d_m - \tau^a_m + 1$ is the time horizon for the agent's MDP.

Figure 2: Illustration of augmenting an MDP to allow for variable starting and stopping times: a) (left) the original two-state MDP with a single action; (right) the augmented MDP with new states $s^b$ and $s^f$ and the new action $a^*$ (note that the original transitions are not changed in the augmentation process); b) the augmented MDP displayed as a trajectory through time (grey lines indicate all transitions, while black lines indicate a given trajectory).

Table 1: MILP for globally optimal resource scheduling. The objective (5) is the sum of expected rewards over all agents:
$$\max \sum_m \sum_s \sum_a r_m(s, a) \sum_t x_m(s, a, t) \quad (5)$$
and the linear constraints, each listed with the condition it enforces, are:
(6) Tie $x$ to $\theta$: the agent is only active when its occupation measure is nonzero in original MDP states, i.e., $\theta_m(\tau) = 0 \Rightarrow x_m(s, a, \tau - \tau^a_m + 1) = 0$, $\forall s \notin \{s^b, s^f\}, a \in A$:
$$\sum_{s \notin \{s^b, s^f\}} \sum_a x_m(s, a, t) \le \theta_m(\tau^a_m + t - 1), \ \forall m \in M, \forall t \in [1, T_m].$$
(7) The agent can only be active in $\tau \in (\tau^a_m, \tau^d_m)$:
$$\theta_m(\tau) = 0, \ \forall m \in M, \tau \notin (\tau^a_m, \tau^d_m).$$
(8) The agent cannot use resources while inactive, i.e., $\theta_m(\tau) = 0 \Rightarrow \Delta_m(\tau, \omega) = 0$, $\forall \tau \in [0, \hat{\tau}], \omega \in \Omega$:
$$\Delta_m(\tau, \omega) \le \theta_m(\tau), \ \forall m \in M, \tau \in [0, \hat{\tau}], \omega \in \Omega.$$
(9) Tie $x$ to $\Delta$: nonzero $x$ forces the corresponding $\Delta$ to be nonzero, i.e., $\Delta_m(\tau, \omega) = 0, \varphi_m(a, \omega) = 1 \Rightarrow x_m(s, a, \tau - \tau^a_m + 1) = 0$, $\forall s \notin \{s^b, s^f\}$:
$$\frac{1}{|A|} \sum_a \varphi_m(a, \omega) \sum_{s \notin \{s^b, s^f\}} x_m(s, a, t) \le \Delta_m(t + \tau^a_m - 1, \omega), \ \forall m \in M, \omega \in \Omega, t \in [1, T_m].$$
(10) Resource bounds:
$$\sum_m \Delta_m(\tau, \omega) \le \hat{\varphi}(\omega), \ \forall \omega \in \Omega, \tau \in [0, \hat{\tau}].$$
(11) The agent cannot change resources while active (enabled only for scheduling with static assignments), i.e., $\theta_m(\tau) = 1 \wedge \theta_m(\tau + 1) = 1 \Rightarrow \Delta_m(\tau, \omega) = \Delta_m(\tau + 1, \omega)$:
$$\Delta_m(\tau, \omega) - Z(1 - \theta_m(\tau + 1)) \le \Delta_m(\tau + 1, \omega) + Z(1 - \theta_m(\tau)),$$
$$\Delta_m(\tau, \omega) + Z(1 - \theta_m(\tau + 1)) \ge \Delta_m(\tau + 1, \omega) - Z(1 - \theta_m(\tau)), \ \forall m \in M, \omega \in \Omega, \tau \in [0, \hat{\tau}].$$

Using this LP as a basis, we augment it with constraints that ensure that the resource usage implied by the agents' occupation measures $\{x_m\}$ does not violate the global resource requirements $\hat{\varphi}$ at any time step $\tau \in [0, \hat{\tau}]$. To formulate these resource constraints, we use the following binary variables:

• $\Delta_m(\tau, \omega) \in \{0, 1\}$, $\forall m \in M, \tau \in [0, \hat{\tau}], \omega \in \Omega$, which serve as indicator variables that define whether agent $m$ possesses resource $\omega$ at time $\tau$. These are analogous to the static indicator variables used in the one-shot static resource-allocation problem in [6].
• $\theta_m(\tau) \in \{0, 1\}$, $\forall m \in M, \tau \in [0, \hat{\tau}]$, which are indicator variables that specify whether agent $m$ is active (i.e., executing its MDP) at time $\tau$.
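Continuing the illustrative PuLP sketches, the following declares the three families of variables just introduced, the flow-conservation constraints (12), and the objective (5); the synchronization constraints of Table 1 follow in the next sketch. As before, the containers are the hypothetical `SchedulingProblem` from Section 3, assumed here to already hold the augmented MDPs (with arrival times shifted one step earlier).

```python
import pulp

def base_milp(prob_in):
    """Variables x, theta, delta plus objective (5) and flow constraints (12)."""
    model = pulp.LpProblem("mdp_scheduling", pulp.LpMaximize)
    x, theta, delta = {}, {}, {}
    for m, ag in enumerate(prob_in.agents):
        T_m = ag.departure - ag.arrival + 1      # horizon of the augmented MDP
        for s in ag.states:
            for a in ag.actions:
                for t in range(1, T_m + 1):      # continuous occupation measure
                    x[m, s, a, t] = pulp.LpVariable(f"x_{m}_{s}_{a}_{t}", lowBound=0)
        for tau in range(0, prob_in.tau_hat + 1):
            theta[m, tau] = pulp.LpVariable(f"th_{m}_{tau}", cat="Binary")
            for w in prob_in.resources:
                delta[m, tau, w] = pulp.LpVariable(f"d_{m}_{tau}_{w}", cat="Binary")
        # Conservation of flow (12): x is consistent with p_m and alpha_m.
        for t in range(1, T_m):
            for sig in ag.states:
                model += (pulp.lpSum(x[m, sig, a, t + 1] for a in ag.actions) ==
                          pulp.lpSum(ag.p.get((sig, s, a), 0.0) * x[m, s, a, t]
                                     for s in ag.states for a in ag.actions))
        for s in ag.states:
            model += (pulp.lpSum(x[m, s, a, 1] for a in ag.actions)
                      == ag.alpha.get(s, 0.0))
    # Objective (5): sum of expected rewards over all agents.
    model += pulp.lpSum(prob_in.agents[m].r.get((s, a), 0.0) * var
                        for (m, s, a, t), var in x.items())
    return model, x, theta, delta
```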
The meaning of the resource-usage variables $\Delta$ is illustrated in Figure 1: $\Delta_m(\tau, \omega) = 1$ only if resource $\omega$ is allocated to agent $m$ at time $\tau$. The meaning of the activity indicators $\theta$ is illustrated in Figure 2b: when agent $m$ is in either the start state $s^b$ or the finish state $s^f$, the corresponding $\theta_m = 0$, but once the agent becomes active and enters one of the other states, we set $\theta_m = 1$. This meaning of $\theta$ can be enforced with a linear constraint that synchronizes the values of the agents' occupation measures $x_m$ and the activity indicators $\theta$, as shown in (6) in Table 1.

Another constraint we have to add, because the activity indicators $\theta$ are defined on the global timeline $\tau$, is to enforce the fact that the agent is inactive outside of its arrival-departure window. This is accomplished by constraint (7) in Table 1. Furthermore, agents should not be using resources while they are inactive. This constraint can also be enforced via a linear inequality on $\theta$ and $\Delta$, as shown in (8).

Constraint (6) sets the value of $\theta$ to match the policy defined by the occupation measure $x_m$. In a similar fashion, we have to make sure that the resource-usage variables $\Delta$ are also synchronized with the occupation measure $x_m$. This is done via constraint (9) in Table 1, which is nearly identical to the analogous constraint from [6]. After implementing the above constraint, which enforces the meaning of $\Delta$, we add a constraint that ensures that the agents' resource usage never exceeds the amounts of available resources. This condition is also trivially expressed as a linear inequality (10) in Table 1. Finally, for the problem formulation where resource assignments are static during a lifetime of an agent, we add a constraint that ensures that the resource-usage variables $\Delta$ do not change their value while the agent is active ($\theta = 1$). This is accomplished via the linear constraint (11), where $Z \ge 2$ is a constant that is used to turn off the constraints when $\theta_m(\tau) = 0$ or $\theta_m(\tau + 1) = 0$. This constraint is not used for the dynamic problem formulation, where resources can be reallocated between agents at every time step.

To summarize, Table 1 together with the conservation-of-flow constraints from (12) defines the MILP that simultaneously computes an optimal resource assignment for all agents across time as well as optimal finite-horizon MDP policies that are valid under that resource assignment.
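In code, constraints (6) through (11) attach to the base model as linear inequalities; this remains our illustrative PuLP sketch rather than the paper's implementation, with `Z = 2` satisfying the $Z \ge 2$ requirement and the `static` flag switching constraint (11) on or off.

```python
import pulp

def add_scheduling_constraints(model, x, theta, delta, prob_in, static=True, Z=2):
    """Constraints (6)-(11) of Table 1, on top of base_milp's output."""
    for m, ag in enumerate(prob_in.agents):
        T_m = ag.departure - ag.arrival + 1
        interior = [s for s in ag.states if s not in ("s_b", "s_f")]
        for t in range(1, T_m + 1):
            tau = ag.arrival + t - 1            # local step t on the global clock
            # (6) active only when occupation mass sits in interior states
            model += (pulp.lpSum(x[m, s, a, t] for s in interior
                                 for a in ag.actions) <= theta[m, tau])
            for w in prob_in.resources:
                # (9) resource-using occupation mass forces delta on
                model += ((1.0 / len(ag.actions)) *
                          pulp.lpSum(ag.phi.get((a, w), 0) * x[m, s, a, t]
                                     for s in interior for a in ag.actions)
                          <= delta[m, tau, w])
        for tau in range(0, prob_in.tau_hat + 1):
            # (7) inactive outside the arrival-departure window
            if not (ag.arrival < tau < ag.departure):
                model += theta[m, tau] == 0
            for w in prob_in.resources:
                # (8) no resources while inactive
                model += delta[m, tau, w] <= theta[m, tau]
                # (11) static case: holdings frozen while active (big-Z trick)
                if static and tau < prob_in.tau_hat:
                    model += (delta[m, tau, w] - Z * (1 - theta[m, tau + 1])
                              <= delta[m, tau + 1, w] + Z * (1 - theta[m, tau]))
                    model += (delta[m, tau, w] + Z * (1 - theta[m, tau + 1])
                              >= delta[m, tau + 1, w] - Z * (1 - theta[m, tau]))
    # (10) global resource bounds at every time step
    for tau in range(0, prob_in.tau_hat + 1):
        for w in prob_in.resources:
            model += (pulp.lpSum(delta[m, tau, w]
                                 for m in range(len(prob_in.agents)))
                      <= prob_in.phi_hat[w])
```

With both pieces in place, calling `model.solve()` on the assembled problem returns jointly optimal schedules and policies for the hypothetical instance.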
As a rough measure of the complexity of this MILP, let us consider the number of optimization variables and constraints. Let $T_M = \sum_m T_m = \sum_m (\tau^d_m - \tau^a_m + 1)$ be the sum of the lengths of the arrival-departure windows across all agents. Then, the number of optimization variables is

$$T_M + \hat{\tau}|M||\Omega| + \hat{\tau}|M|,$$

$T_M$ of which are continuous ($x_m$), and $\hat{\tau}|M||\Omega| + \hat{\tau}|M|$ of which are binary ($\Delta$ and $\theta$). However, notice that all but $T_M|M|$ of the $\theta$ are set to zero by constraint (7), which also immediately forces all but $T_M|M||\Omega|$ of the $\Delta$ to be zero via the constraints (8). The number of constraints (not including the degenerate constraints in (7)) in the MILP is

$$T_M + T_M|\Omega| + \hat{\tau}|\Omega| + \hat{\tau}|M||\Omega|.$$

(For instance, with $|M| = 5$ agents, $|\Omega| = 5$ resources, $\hat{\tau} = 50$, and ten-step windows, this comes to $T_M = 50$ continuous variables and $\hat{\tau}|M||\Omega| + \hat{\tau}|M| = 1250 + 250 = 1500$ binary variables before the fixing implied by (7) and (8).) Despite the fact that the complexity of the MILP is, in the worst case, exponential in the number of binary variables (strictly speaking, solving MILPs to optimality is NP-complete in the number of integer variables), the complexity of this MILP is significantly (exponentially) lower than that of the MILP with flat utility functions described in Section 2.2. This result echoes the efficiency gains reported in [6] for single-shot resource-allocation problems, but is much more pronounced here, because of the explosion of the flat utility representation due to the temporal aspect of the problem (recall the prohibitive complexity of the combinatorial optimization in Section 2.2). We empirically analyze the performance of this method in Section 5.

5. EXPERIMENTAL RESULTS

Although the complexity of solving MILPs is in the worst case exponential in the number of integer variables, there are many efficient methods for solving MILPs that allow our algorithm to scale well for parameters common to resource-allocation and scheduling problems. In particular, this section introduces a problem domain, the repairshop problem, used to empirically evaluate our algorithm's scalability in terms of the number of agents $|M|$, the number of shared resources $|\Omega|$, and the varied lengths of global time $\hat{\tau}$ during which agents may enter and exit the system.

The repairshop problem is a simple parameterized MDP adopting the metaphor of a vehicular repair shop. Agents in the repair shop are mechanics with a number of independent tasks that yield reward only when completed. In our MDP model of this system, actions taken to advance through the state space are only allowed if the agent holds certain resources that are publicly available to the shop. These resources are in finite supply, and optimal policies for the shop will determine when each agent may hold the limited resources to take actions and earn individual rewards. Each task to be completed is associated with a single action, although the agent is required to repeat the action numerous times before completing the task and earning a reward. This model was parameterized in terms of the number of agents in the system, the number of different types of resources that could be linked to necessary actions, a global time during which agents are allowed to arrive and depart, and a maximum length for the number of time steps an agent may remain in the system.
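A single mechanic's task in this domain might be generated as below; the chain structure, repeat count, and reward placement are illustrative choices of ours rather than the paper's exact generator, and independent tasks could be composed by running several such chains in parallel.

```python
def repairshop_task_mdp(reps, reward, resource):
    """One task as a chain MDP: the single action 'work' must be repeated
    `reps` times, pays `reward` on the final repeat, and requires `resource`
    (i.e., phi('work', resource) = 1).  Conventions follow AgentSpec above."""
    states = [f"step{i}" for i in range(reps + 1)]
    actions = ["work"]
    p = {(f"step{i + 1}", f"step{i}", "work"): 1.0 for i in range(reps)}
    p[(f"step{reps}", f"step{reps}", "work")] = 1.0   # done state is absorbing
    r = {(f"step{reps - 1}", "work"): float(reward)}  # paid on task completion
    alpha = {"step0": 1.0}
    phi = {("work", resource): 1}
    return states, actions, p, r, alpha, phi
```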
All datapoints in our experiments were obtained with 20 evaluations using CPLEX to solve the MILPs on a Pentium 4 computer with 2 GB of RAM. Trials were conducted on both the static and the dynamic version of the resource-scheduling problem, as defined earlier.

Figure 3 shows the runtime and policy value for independent modifications to the parameter set. The top row shows how the solution time for the MILP scales as we increase the number of agents $|M|$, the global time horizon $\hat{\tau}$, and the number of resources $|\Omega|$. Increasing the number of agents leads to exponential complexity scaling, which is to be expected for an NP-complete problem. However, increasing the global time limit $\hat{\tau}$ or the total number of resource types $|\Omega|$, while holding the number of agents constant, does not lead to decreased performance. This occurs because the problems get easier as they become under-constrained, which is also a common phenomenon for NP-complete problems. We also observe that the solution to the dynamic version of the problem can often be computed much faster than the static version.

The bottom row of Figure 3 shows the joint policy value of the policies that correspond to the computed optimal resource-allocation schedules. We can observe that the dynamic version yields higher reward (as expected, since the reward for the dynamic version is always no less than the reward of the static version). We should point out that these graphs should not be viewed as a measure of performance of two different algorithms (both algorithms produce optimal solutions, but to different problems), but rather as observations about how the quality of optimal solutions changes as more flexibility is allowed in the reallocation of resources.

Figure 3: Evaluation of our MILP for variable numbers of agents (column 1), lengths of global-time window (column 2), and numbers of resource types (column 3). Top row shows CPU time, and bottom row shows the joint reward of agents' MDP policies. Error bars show the 1st and 3rd quartiles (25% and 75%).

Figure 4 shows runtime and policy value for trials in which common input variables are scaled together. This allows us to explore domains where the total number of agents scales proportionally to the total number of resource types or the global time horizon, while keeping constant the average agent density (per unit of global time) or the average number of resources per agent (which commonly occurs in real-life applications).

Figure 4: Evaluation of our MILP using correlated input variables. The left column tracks the performance and CPU time as the number of agents and global-time window increase together ($\hat{\tau} = 10|M|$). The middle and right columns track the performance and CPU time as the number of resources and the number of agents increase together, as $|\Omega| = 2|M|$ and $|\Omega| = 5|M|$, respectively. Error bars show the 1st and 3rd quartiles (25% and 75%).

Overall, we believe that these experimental results indicate that our MILP formulation can be used to effectively solve resource-scheduling problems of nontrivial size.
6. DISCUSSION AND CONCLUSIONS

Throughout the paper, we have made a number of assumptions in our model and solution algorithm; we discuss their implications below.

• Continual execution. We assume that once an agent stops executing its MDP (transitions into state $s^f$), it exits the system and cannot return. It is easy to relax this assumption for domains where agents' MDPs can be paused and restarted. All that is required is to include an additional pause action that transitions from a given state back to itself with zero reward.

• Indifference to start time. We used a reward model where agents' rewards depend only on the time horizon of their MDPs and not on the global start time. This is a consequence of our MDP-augmentation procedure from Section 4.1. It is easy to extend the model so that the agents incur an explicit penalty for idling by assigning a non-zero negative reward to the start state $s^b$.

• Binary resource requirements. For simplicity, we have assumed that resource costs are binary, $\varphi_m(a, \omega) \in \{0, 1\}$, but our results generalize in a straightforward manner to non-binary resource mappings, analogously to the procedure used in [5].

• Cooperative agents. The optimization procedure discussed in this paper was developed in the context of cooperative agents, but it can also be used to design a mechanism for scheduling resources among self-interested agents. This optimization procedure can be embedded in a Vickrey-Clarke-Groves auction, completely analogously to the way it was done in [7]. In fact, all the results of [7] about the properties of the auction and information privacy directly carry over to the scheduling domain discussed in this paper, requiring only slight modifications to deal with finite-horizon MDPs.

• Known, deterministic arrival and departure times. Finally, we have assumed that agents' arrival and departure times ($\tau^a_m$ and $\tau^d_m$) are deterministic and known a priori. This assumption is fundamental to our solution method. While there are many domains where this assumption is valid, in many cases agents arrive and depart dynamically, and their arrival and departure times can only be predicted probabilistically, leading to online resource-allocation problems. In particular, in the case of self-interested agents, this becomes an interesting version of an online-mechanism-design problem [11, 12].

In summary, we have presented an MILP formulation for the combinatorial resource-scheduling problem where agents' values for possible resource assignments are defined by finite-horizon MDPs. This result extends previous work ([6, 7]) on static one-shot resource allocation under MDP-induced preferences to resource-scheduling problems with a temporal aspect. As such, this work takes a step in the direction of designing an online mechanism for agents with combinatorial resource preferences induced by stochastic planning problems. Relaxing the assumption about deterministic arrival and departure times of the agents is a focus of our future work.

We would like to thank the anonymous reviewers for their insightful comments and suggestions.
7. REFERENCES

[1] E. Altman and A. Shwartz. Adaptive control of constrained Markov chains: Criteria and policies. Annals of Operations Research, special issue on Markov Decision Processes, 28:101-134, 1991.
[2] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[3] C. Boutilier. Solving concisely expressed combinatorial auction problems. In Proc. of AAAI-02, pages 359-366, 2002.
[4] C. Boutilier and H. H. Hoos. Bidding languages for combinatorial auctions. In Proc. of IJCAI-01, pages 1211-1217, 2001.
[5] D. Dolgov. Integrated Resource Allocation and Planning in Stochastic Multiagent Environments. PhD thesis, Computer Science Department, University of Michigan, February 2006.
[6] D. A. Dolgov and E. H. Durfee. Optimal resource allocation and policy formulation in loosely-coupled Markov decision processes. In Proc. of ICAPS-04, pages 315-324, June 2004.
[7] D. A. Dolgov and E. H. Durfee. Computationally efficient combinatorial auctions for resource allocation in weakly-coupled MDPs. In Proc. of AAMAS-05, New York, NY, USA, 2005. ACM Press.
[8] D. A. Dolgov and E. H. Durfee. Resource allocation among agents with preferences induced by factored MDPs. In Proc. of AAMAS-06, 2006.
[9] K. Larson and T. Sandholm. Mechanism design and deliberative agents. In Proc. of AAMAS-05, pages 650-656, New York, NY, USA, 2005. ACM Press.
[10] N. Nisan. Bidding and allocation in combinatorial auctions. In Electronic Commerce, 2000.
[11] D. C. Parkes and S. Singh. An MDP-based approach to online mechanism design. In Proc. of the Seventeenth Annual Conference on Neural Information Processing Systems (NIPS-03), 2003.
[12] D. C. Parkes, S. Singh, and D. Yanovsky. Approximately efficient online mechanism design. In Proc. of the Eighteenth Annual Conference on Neural Information Processing Systems (NIPS-04), 2004.
[13] M. L. Puterman. Markov Decision Processes. John Wiley & Sons, New York, 1994.
[14] M. H. Rothkopf, A. Pekec, and R. M. Harstad. Computationally manageable combinational auctions. Management Science, 44(8):1131-1147, 1998.
[15] T. Sandholm. An algorithm for optimal winner determination in combinatorial auctions. In Proc. of IJCAI-99, pages 542-547, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.
Combinatorial Resource Scheduling for Multiagent MDPs ABSTRACT Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic jobscheduling domain. 1. INTRODUCTION The tasks of optimal resource allocation and scheduling are ubiquitous in multiagent systems, but solving such optimization problems can be computationally difficult, due to a number of factors. In particular, when the value of a set of resources to an agent is not additive (as is often the case with resources that are substitutes or complements), the utility function might have to be defined on an exponentially large space of resource bundles, which very quickly becomes computationally intractable. Further, even when each agent has a utility function that is nonzero only on a small subset of the possible resource bundles, obtaining optimal allocation is still computationally prohibitive, as the problem becomes NP-complete [14]. Such computational issues have recently spawned several threads of work in using compact models of agents' preferences. One idea is to use any structure present in utility functions to represent them compactly, via, for example, logical formulas [15, 10, 4, 3]. An alternative is to directly model the mechanisms that define the agents' utility functions and perform resource allocation directly with these models [9]. A way of accomplishing this is to model the processes by which an agent might utilize the resources and define the utility function as the payoff of these processes. In particular, if an agent uses resources to act in a stochastic environment, its utility function can be naturally modeled with a Markov decision process, whose action set is parameterized by the available resources. This representation can then be used to construct very efficient resource-allocation algorithms that lead to an exponential speedup over a straightforward optimization problem with flat representations of combinatorial preferences [6, 7, 8]. However, this existing work on resource allocation with preferences induced by resource-parameterized MDPs makes an assumption that the resources are only allocated once and are then utilized by the agents independently within their infinite-horizon MDPs. This assumption that no reallocation of resources is possible can be limiting in domains where agents arrive and depart dynamically. 
In this paper, we extend the work on resource allocation under MDP-induced preferences to discrete-time scheduling problems, where agents are present in the system for finite time intervals and can only use resources within these intervals. In particular, agents arrive and depart at arbitrary (predefined) times and within these intervals use resources to execute tasks in finite-horizon MDPs. We address the problem of globally optimal resource scheduling, where the objective is to find an allocation of resources to the agents across time that maximizes the sum of the expected rewards that they obtain. In this context, our main contribution is a mixed-integerprogramming formulation of the scheduling problem that chooses globally optimal resource assignments, starting times, and execution horizons for all agents (within their arrival departure intervals). We analyze and empirically compare two flavors of the scheduling problem: one, where agents have static resource assignments within their finite-horizon MDPs, and another, where resources can be dynamically reallocated between agents at every time step. In the rest of the paper, we first lay down the necessary groundwork in Section 2 and then introduce our model and formal problem statement in Section 3. In Section 4.2, we describe our main result, the optimization program for globally optimal resource scheduling. Following the discussion of our experimental results on a job-scheduling problem in Section 5, we conclude in Section 6 with a discussion of possible extensions and generalizations of our method. 2. BACKGROUND Similarly to the model used in previous work on resourceallocation with MDP-induced preferences [6, 7], we define the value of a set of resources to an agent as the value of the best MDP policy that is realizable, given those resources. However, since the focus of our work is on scheduling problems, and a large part of the optimization problem is to decide how resources are allocated in time among agents with finite arrival and departure times, we model the agents' planning problems as finite-horizon MDPs, in contrast to previous work that used infinite-horizon discounted MDPs. In the rest of this section, we first introduce some necessary background on finite-horizon MDPs and present a linear-programming formulation that serves as the basis for our solution algorithm developed in Section 4. We also outline the standard methods for combinatorial resource scheduling with flat resource values, which serve as a comparison benchmark for the new model developed here. 2.1 Markov Decision Processes A stationary, finite-domain, discrete-time MDP (see, for example, [13] for a thorough and detailed development) can be described as (S, A, p, r), where: S is a finite set of system states; A is a finite set of actions that are available to the agent; p is a stationary stochastic transition function, where p (σ | s, a) is the probability of transitioning to state σ upon executing action a in state s; r is a stationary reward function, where r (s, a) specifies the reward obtained upon executing action a in state s. Given such an MDP, a decision problem under a finite horizon T is to choose an optimal action at every time step to maximize the expected value of the total reward accrued during the agent's (finite) lifetime. The agent's optimal policy is then a function of current state s and the time until the horizon. 
An optimal policy for such a problem is to act greedily with respect to the optimal value function, defined recursively by the following system of finite-time Bellman equations [2]: This optimal value function can be easily computed using dynamic programming, leading to the following optimal policy π, where π (s, a, t) is the probability of executing action The above is the most common way of computing the optimal value function (and therefore an optimal policy) for a finite-horizon MDP. However, we can also formulate the problem as the following linear program (similarly to the dual LP for infinite-horizon discounted MDPs [13, 6, 7]): Note that the standard unconstrained finite-horizon MDP, as described above, always has a uniformly-optimal solution (optimal for any initial distribution α (s)). Therefore, an optimal policy can be obtained by using an arbitrary constant α (s)> 0 (in particular, α (s) = 1 will result in x (s, a, t) = π (s, a, t)). However, for MDPs with resource constraints (as defined below in Section 3), uniformly-optimal policies do not in general exist. In such cases, α becomes a part of the problem input, and a resulting policy is only optimal for that particular α. This result is well known for infinite-horizon MDPs with various types of constraints [1, 6], and it also holds for our finite-horizon model, which can be easily established via a line of reasoning completely analogous to the arguments in [6]. 2.2 Combinatorial Resource Scheduling A straightforward approach to resource scheduling for a set of agents M, whose values for the resources are induced by stochastic planning problems (in our case, finite-horizon MDPs) would be to have each agent enumerate all possible resource assignments over time and, for each one, compute its value by solving the corresponding MDP. Then, each agent would provide valuations for each possible resource bundle over time to a centralized coordinator, who would compute the optimal resource assignments across time based on these valuations. When resources can be allocated at different times to different agents, each agent must submit valuations for every combination of possible time horizons. Let each agent m E M execute its MDP within the arrival-departure time interval τ E [τam, τdm]. Hence, agent m will execute an MDP with time horizon no greater than Tm = τd m − τ a m +1. Let bτ be the global time horizon for the problem, before which all of the agents' MDPs must finish. We assume τdm <bτ, Vm E M. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1221 For the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from [1, Tm]) that they may use. Let Ω be the set of resources to be allocated among the agents. An agent will get at most one resource bundle for one of the time horizons. Let the variable ψ ∈ Ψm enumerate all possible pairs of resource bundles and time horizons for agent m, so there are 2 | Ω | × Tm values for ψ (the space of bundles is exponential in the number of resource types | Ω |). The agent m must provide a value vψm for each ψ, and the coordinator will allocate at most one ψ (resource, time horizon) pair to each agent. This allocation is expressed as an indicator variable zψm ∈ {0, 1} that shows whether ψ is assigned to agent m. 
For time τ and resource ω, the function nm (ψ, τ, ω) ∈ {0, 1} indicates whether the bundle in ψ uses resource ω at time τ (we make the assumption that agents have binary resource requirements). This allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of ψ. The problem of finding an optimal allocation that satisfies the global constraint that the amount of each resource ω allocated to all agents does not exceed the available amount bϕ (ω) can be expressed as the following integer program: (3) The first constraint in equation 3 says that no agent can receive more than one bundle, and the second constraint ensures that the total assignment of resource ω does not, at any time, exceed the resource bound. For the scheduling problem where the agents are able to dynamically reallocate resources, each agent must specify a value for every combination of bundles and time steps within its time horizon. Let the variable ψ ∈ Ψm in this case enumerate all possible resource bundles for which at most one bundle may be assigned to agent m at each time step. Therefore, in this case there are Pt ∈ [1, T -] (2 | Ω |) t ∼ 2 | Ω | Tpossibilities of resource bundles assigned to different time slots, for the Tm different time horizons. The same set of equations (3) can be used to solve this dynamic scheduling problem, but the integer program is different because of the difference in how ψ is defined. In this case, the number of ψ values is exponential in each agent's planning horizon Tm, resulting in a much larger program. This straightforward approach to solving both of these of either 2 | Ω | Tm (static allocation) or P scheduling problems requires an enumeration and solution t ∈ [1, T -] 2 | Ω | t (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable with the growth of the number of resources | Ω | or the time horizon Tm. 3. MODEL AND PROBLEM STATEMENT We now formally introduce our model of the resourcescheduling problem. The problem input consists of the following components: • M, Ω, bϕ, τa m, τd m, bτ are as defined above in Section 2.2. • {Θm} = {S, A, pm, rm, αm} are the MDPs of all agents m ∈ M. Without loss of generality, we assume that state and action spaces of all agents are the same, but each has its own transition function pm, reward function rm, and initial conditions αm. • ϕm: A × Ω ~ → {0, 1} is the mapping of actions to resources for agent m. ϕm (a, ω) indicates whether action a of agent m needs resource ω. An agent m that receives a set of resources that does not include resource ω cannot execute in its MDP policy any action a for which ϕm (a, ω) = ~ 0. We assume all resource requirements are binary; as discussed below in Section 6, this assumption is not limiting. Given the above input, the optimization problem we consider is to find the globally optimal--maximizing the sum of expected rewards--mapping of resources to agents for all time steps: Δ: τ × M × Ω ~ → {0, 1}. A solution is feasible if the corresponding assignment of resources to the agents does not violate the global resource constraint: We consider two flavors of the resource-scheduling problem. The first formulation restricts resource assignments to the space where the allocation of resources to each agent is static during the agent's lifetime. The second formulation allows reassignment of resources between agents at every time step within their lifetimes. 
Figure 1 depicts a resource-scheduling problem with three agents M = {m1, m2, m3}, three resources Ω = {ω1, ω2, ω3}, and a global problem horizon of bτ = 11. The agents' arrival and departure times are shown as gray boxes and are {1, 6}, {3, 7}, and {2, 11}, respectively. A solution to this problem is shown via horizontal bars within each agents' box, where the bars correspond to the allocation of the three resource types. Figure 1a shows a solution to a static scheduling problem. According to the shown solution, agent m1 begins the execution of its MDP at time τ = 1 and has a lock on all three resources until it finishes execution at time τ = 3. Note that agent m1 relinquishes its hold on the resources before its announced departure time of τdm1 = 6, ostensibly because other agents can utilize the resources more effectively. Thus, at time τ = 4, resources ω1 and ω3 are allocated to agent m2, who then uses them to execute its MDP (using only actions supported by resources ω1 and ω3) until time τ = 7. Agent m3 holds resource ω3 during the interval τ ∈ [4, 10]. Figure 1b shows a possible solution to the dynamic version of the same problem. There, resources can be reallocated between agents at every time step. For example, agent m1 gives up its use of resource ω2 at time τ = 2, although it continues the execution of its MDP until time τ = 6. Notice that an agent is not allowed to stop and restart its MDP, so agent m1 is only able to continue executing in the interval τ ∈ [3, 4] if it has actions that do not require any resources (ϕm (a, ω) = 0). Clearly, the model and problem statement described above make a number of assumptions about the problem and the desired solution properties. We discuss some of those assumptions and their implications in Section 6. 1222 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 1: Illustration of a solution to a resource-scheduling problem with three agents and three resources: a) static resource assignments (resource assignments are constant within agents' lifetimes; b) dynamic assignment (resource assignments are allowed to change at every time step). 4. RESOURCE SCHEDULING Our resource-scheduling algorithm proceeds in two stages. First, we perform a preprocessing step that augments the agent MDPs; this process is described in Section 4.1. Second, using these augmented MDPs we construct a global optimization problem, which is described in Section 4.2. 4.1 Augmenting Agents' MDPs In the model described in the previous section, we assume that if an agent does not possess the necessary resources to perform actions in its MDP, its execution is halted and the agent leaves the system. In other words, the MDPs cannot be "paused" and "resumed". For example, in the problem shown in Figure 1a, agent m1 releases all resources after time τ = 3, at which point the execution of its MDP is halted. Similarly, agents m2 and m3 only execute their MDPs in the intervals τ E [4, 6] and τ E [4, 10], respectively. Therefore, an important part of the global decision-making problem is to decide the window of time during which each of the agents is "active" (i.e., executing its MDP). To accomplish this, we augment each agent's MDP with two new states ("start" and "finish" states sb, sf, respectively) and a new "start/stop" action a `, as illustrated in Figure 2. 
The idea is that an agent stays in the start state sb until it is ready to execute its MDP, at which point it performs the start/stop action a ` and transitions into the state space of the original MDP with the transition probability that corresponds to the original initial distribution α (s). For example, in Figure 1a, for agent m2 this would happen at time τ = 4. Once the agent gets to the end of its activity window (time τ = 6 for agent m2 in Figure 1a), it performs the start/stop action, which takes it into the sink finish state sf at time τ = 7. More precisely, given an MDP (S, A, pm, rm, αm), we define an augmented MDP (S', A', p' m, r' m, α'm) as follows: where all non-specified transition probabilities are assumed to be zero. Further, in order to account for the new starting state, we begin the MDP one time-step earlier, setting τam +--τam − 1. This will not affect the resource allocation due to the resource constraints only being enforced for the original MDP states, as will be discussed in the next section. For example, the augmented MDPs shown in Figure 2b (which starts in state sb at time τ = 2) would be constructed from an MDP with original arrival time τ = 3. Figure 2b also shows a sample trajectory through the state space: the agent starts in state sb, transitions into the state space S of the original MDP, and finally exists into the sink state sf. Note that if we wanted to model a problem where agents could pause their MDPs at arbitrary time steps (which might be useful for domains where dynamic reallocation is possible), we could easily accomplish this by including an extra action that transitions from each state to itself with zero reward. 4.2 MILP for Resource Scheduling Given a set of augmented MDPs, as defined above, the goal of this section is to formulate a global optimization program that solves the resource-scheduling problem. In this section and below, all MDPs are assumed to be the augmented MDPs as defined in Section 4.1. Our approach is similar to the idea used in [6]: we begin with the linear-program formulation of agents' MDPs (1) and augment it with constraints that ensure that the corresponding resource allocation across agents and time is valid. The resulting optimization problem then simultaneously solves the agents' MDPs and resource-scheduling problems. In the rest of this section, we incrementally develop a mixed integer program (MILP) that achieves this. In the absence of resource constraints, the agents' finitehorizon MDPs are completely independent, and the globally optimal solution can be trivially obtained via the following LP, which is simply an aggregation of single-agent finitehorizon LPs: where xm (s, a, t) is the occupation measure of agent m, and Figure 2: Illustration of augmenting an MDP to allow for variable starting and stopping times: a) (left) the original two-state MDP with a single action; (right) the augmented MDP with new states sb and sf and the new action a ∗ (note that the origianl transitions are not changed in the augmentation process); b) the augmented MDP displayed as a trajectory through time (grey lines indicate all transitions, while black lines indicate a given trajectory. Table 1: MILP for globally optimal resource scheduling. Tm = τdm − τam + 1 is the time horizon for the agent's MDP. Using this LP as a basis, we augment it with constraints that ensure that the resource usage implied by the agents' occupation measures {xm} does not violate the global resource requirements ϕb at any time step τ ∈ [0, bτ]. 
To formulate these resource constraints, we use the following binary variables: • Δm (τ, ω) = {0, 1}, ∀ m ∈ M, τ ∈ [0, bτ], ω ∈ Ω, which serve as indicator variables that define whether agent m possesses resource ω at time τ. These are analogous to the static indicator variables used in the one-shot static resource-allocation problem in [6]. • θm = {0, 1}, ∀ m ∈ M, τ ∈ [0, bτ] are indicator variables that specify whether agent m is "active" (i.e., executing its MDP) at time τ. The meaning of resource-usage variables Δ is illustrated in Figure 1: Δm (τ, ω) = 1 only if resource ω is allocated to agent m at time τ. The meaning of the "activity indicators" θ is illustrated in Figure 2b: when agent m is in either the start state sb or the finish state sf, the corresponding θm = 0, but once the agent becomes active and enters one of the other states, we set θm = 1. This meaning of θ can be enforced with a linear constraint that synchronizes the values of the agents' occupation measures xm and the activity 1224 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) indicators 0, as shown in (6) in Table 1. Another constraint we have to add--because the activity indicators 0 are defined on the global timeline - r--is to enforce the fact that the agent is inactive outside of its arrivaldeparture window. This is accomplished by constraint (7) in Table 1. Furthermore, agents should not be using resources while they are inactive. This constraint can also be enforced via a linear inequality on 0 and Δ, as shown in (8). Constraint (6) sets the value of 0 to match the policy defined by the occupation measure x _. In a similar fashion, we have to make sure that the resource-usage variables Δ are also synchronized with the occupation measure x _. This is done via constraint (9) in Table 1, which is nearly identical to the analogous constraint from [6]. After implementing the above constraint, which enforces the meaning of Δ, we add a constraint that ensures that the agents' resource usage never exceeds the amounts of available resources. This condition is also trivially expressed as a linear inequality (10) in Table 1. Finally, for the problem formulation where resource assignments are static during a lifetime of an agent, we add a constraint that ensures that the resource-usage variables Δ do not change their value while the agent is active (0 = 1). This is accomplished via the linear constraint (11), where Z> 2 is a constant that is used to turn off the constraints when 0 _ (- r) = 0 or 0 _ (- r + 1) = 0. This constraint is not used for the dynamic problem formulation, where resources can be reallocated between agents at every time step. To summarize, Table 1 together with the conservationof-flow constraints from (12) defines the MILP that simultaneously computes an optimal resource assignment for all agents across time as well as optimal finite-horizon MDP policies that are valid under that resource assignment. As a rough measure of the complexity of this MILP, let us consider the number of optimization variables and constraints. Let TM = ET _ = E _ (- r' _--- rd _ + 1) be the sum of the lengths of the arrival-departure windows across all agents. Then, the number of optimization variables is: TM of which are continuous (x _), and b-rIMIIΩI + b-rIMI are binary (Δ and 0). However, notice that all but TMIMI of the 0 are set to zero by constraint (7), which also immediately forces all but TMIMIIΩI of the Δ to be zero via the constraints (8). 
The number of constraints (not including the degenerate constraints in (7)) in the MILP is: Despite the fact that the complexity of the MILP is, in the worst case, exponential1 in the number of binary variables, the complexity of this MILP is significantly (exponentially) lower than that of the MILP with flat utility functions, described in Section 2.2. This result echos the efficiency gains reported in [6] for single-shot resource-allocation problems, but is much more pronounced, because of the explosion of the flat utility representation due to the temporal aspect of the problem (recall the prohibitive complexity of the combinatorial optimization in Section 2.2). We empirically analyze the performance of this method in Section 5. 5. EXPERIMENTAL RESULTS Although the complexity of solving MILPs is in the worst case exponential in the number of integer variables, there are many efficient methods for solving MILPs that allow our algorithm to scale well for parameters common to resource allocation and scheduling problems. In particular, this section introduces a problem domain--the repairshop problem--used to empirically evaluate our algorithm's scalability in terms of the number of agents IMI, the number of shared resources IΩI, and the varied lengths of global time b-r during which agents may enter and exit the system. The repairshop problem is a simple parameterized MDP adopting the metaphor of a vehicular repair shop. Agents in the repair shop are mechanics with a number of independent tasks that yield reward only when completed. In our MDP model of this system, actions taken to advance through the state space are only allowed if the agent holds certain resources that are publicly available to the shop. These resources are in finite supply, and optimal policies for the shop will determine when each agent may hold the limited resources to take actions and earn individual rewards. Each task to be completed is associated with a single action, although the agent is required to repeat the action numerous times before completing the task and earning a reward. This model was parameterized in terms of the number of agents in the system, the number of different types of resources that could be linked to necessary actions, a global time during which agents are allowed to arrive and depart, and a maximum length for the number of time steps an agent may remain in the system. All datapoints in our experiments were obtained with 20 evaluations using CPLEX to solve the MILPs on a Pentium4 computer with 2Gb of RAM. Trials were conducted on both the static and the dynamic version of the resourcescheduling problem, as defined earlier. Figure 3 shows the runtime and policy value for independent modifications to the parameter set. The top row shows how the solution time for the MILP scales as we increase the number of agents IMI, the global time horizon b-r, and the number of resources IΩI. Increasing the number of agents leads to exponential complexity scaling, which is to be expected for an NP-complete problem. However, increasing the global time limit b-r or the total number of resource types IΩI--while holding the number of agents constant--does not lead to decreased performance. This occurs because the problems get easier as they become under-constrained, which is also a common phenomenon for NP-complete problems. We also observe that the solution to the dynamic version of the problem can often be computed much faster than the static version. 
The bottom row of Figure 3 shows the joint policy value of the policies that correspond to the computed optimal resource-allocation schedules. We can observe that the dynamic version yields higher reward (as expected, since the reward for the dynamic version is always no less than the reward of the static version). We should point out that these graphs should not be viewed as a measure of performance of two different algorithms (both algorithms produce optimal solutions but to different problems), but rather as observations about how the quality of optimal solutions change as more flexibility is allowed in the reallocation of resources. Figure 4 shows runtime and policy value for trials in which common input variables are scaled together. This allows The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1225 Figure 3: Evaluation of our MILP for variable numbers of agents (column 1), lengths of global-time window (column 2), and numbers of resource types (column 3). Top row shows CPU time, and bottom row shows the joint reward of agents' MDP policies. Error bars show the 1st and 3rd quartiles (25% and 75%). us to explore domains where the total number of agents scales proportionally to the total number of resource types or the global time horizon, while keeping constant the average agent density (per unit of global time) or the average number of resources per agent (which commonly occurs in real-life applications). Overall, we believe that these experimental results indicate that our MILP formulation can be used to effectively solve resource-scheduling problems of nontrivial size.
Combinatorial Resource Scheduling for Multiagent MDPs ABSTRACT Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic jobscheduling domain. 1. INTRODUCTION The tasks of optimal resource allocation and scheduling are ubiquitous in multiagent systems, but solving such optimization problems can be computationally difficult, due to a number of factors. In particular, when the value of a set of resources to an agent is not additive (as is often the case with resources that are substitutes or complements), the utility function might have to be defined on an exponentially large space of resource bundles, which very quickly becomes computationally intractable. Further, even when each agent has a utility function that is nonzero only on a small subset of the possible resource bundles, obtaining optimal allocation is still computationally prohibitive, as the problem becomes NP-complete [14]. Such computational issues have recently spawned several threads of work in using compact models of agents' preferences. One idea is to use any structure present in utility functions to represent them compactly, via, for example, logical formulas [15, 10, 4, 3]. An alternative is to directly model the mechanisms that define the agents' utility functions and perform resource allocation directly with these models [9]. A way of accomplishing this is to model the processes by which an agent might utilize the resources and define the utility function as the payoff of these processes. In particular, if an agent uses resources to act in a stochastic environment, its utility function can be naturally modeled with a Markov decision process, whose action set is parameterized by the available resources. This representation can then be used to construct very efficient resource-allocation algorithms that lead to an exponential speedup over a straightforward optimization problem with flat representations of combinatorial preferences [6, 7, 8]. However, this existing work on resource allocation with preferences induced by resource-parameterized MDPs makes an assumption that the resources are only allocated once and are then utilized by the agents independently within their infinite-horizon MDPs. This assumption that no reallocation of resources is possible can be limiting in domains where agents arrive and depart dynamically. 
In this paper, we extend the work on resource allocation under MDP-induced preferences to discrete-time scheduling problems, where agents are present in the system for finite time intervals and can only use resources within these intervals. In particular, agents arrive and depart at arbitrary (predefined) times and within these intervals use resources to execute tasks in finite-horizon MDPs. We address the problem of globally optimal resource scheduling, where the objective is to find an allocation of resources to the agents across time that maximizes the sum of the expected rewards that they obtain. In this context, our main contribution is a mixed-integerprogramming formulation of the scheduling problem that chooses globally optimal resource assignments, starting times, and execution horizons for all agents (within their arrival departure intervals). We analyze and empirically compare two flavors of the scheduling problem: one, where agents have static resource assignments within their finite-horizon MDPs, and another, where resources can be dynamically reallocated between agents at every time step. In the rest of the paper, we first lay down the necessary groundwork in Section 2 and then introduce our model and formal problem statement in Section 3. In Section 4.2, we describe our main result, the optimization program for globally optimal resource scheduling. Following the discussion of our experimental results on a job-scheduling problem in Section 5, we conclude in Section 6 with a discussion of possible extensions and generalizations of our method. 2. BACKGROUND Similarly to the model used in previous work on resourceallocation with MDP-induced preferences [6, 7], we define the value of a set of resources to an agent as the value of the best MDP policy that is realizable, given those resources. However, since the focus of our work is on scheduling problems, and a large part of the optimization problem is to decide how resources are allocated in time among agents with finite arrival and departure times, we model the agents' planning problems as finite-horizon MDPs, in contrast to previous work that used infinite-horizon discounted MDPs. In the rest of this section, we first introduce some necessary background on finite-horizon MDPs and present a linear-programming formulation that serves as the basis for our solution algorithm developed in Section 4. We also outline the standard methods for combinatorial resource scheduling with flat resource values, which serve as a comparison benchmark for the new model developed here. 2.1 Markov Decision Processes A stationary, finite-domain, discrete-time MDP (see, for example, [13] for a thorough and detailed development) can be described as (S, A, p, r), where: S is a finite set of system states; A is a finite set of actions that are available to the agent; p is a stationary stochastic transition function, where p (σ | s, a) is the probability of transitioning to state σ upon executing action a in state s; r is a stationary reward function, where r (s, a) specifies the reward obtained upon executing action a in state s. Given such an MDP, a decision problem under a finite horizon T is to choose an optimal action at every time step to maximize the expected value of the total reward accrued during the agent's (finite) lifetime. The agent's optimal policy is then a function of current state s and the time until the horizon. 
An optimal policy for such a problem is to act greedily with respect to the optimal value function, defined recursively by the following system of finite-time Bellman equations [2]:

v(s, t) = max_a [ r(s, a) + Σ_σ p(σ | s, a) v(σ, t − 1) ],   v(s, 0) = 0,

where t is the number of decision steps remaining until the horizon. This optimal value function can be easily computed using dynamic programming, leading to the following optimal policy π, where π(s, a, t) is the probability of executing action a in state s with t steps remaining:

π(s, a, t) = 1 if a = argmax_a′ [ r(s, a′) + Σ_σ p(σ | s, a′) v(σ, t − 1) ], and 0 otherwise.

The above is the most common way of computing the optimal value function (and therefore an optimal policy) for a finite-horizon MDP. However, we can also formulate the problem as the following linear program over the occupation measure x(s, a, t) (similarly to the dual LP for infinite-horizon discounted MDPs [13, 6, 7]):

max_x Σ_{s,a,t} r(s, a) x(s, a, t)
s.t.  Σ_a x(σ, a, t + 1) = Σ_{s,a} p(σ | s, a) x(s, a, t)   ∀σ, ∀t ∈ [1, T − 1];
      Σ_a x(s, a, 1) = α(s)   ∀s;
      x(s, a, t) ≥ 0.

Note that the standard unconstrained finite-horizon MDP, as described above, always has a uniformly-optimal solution (optimal for any initial distribution α(s)). Therefore, an optimal policy can be obtained by using an arbitrary constant α(s) > 0 (in particular, α(s) = 1 will result in x(s, a, t) = π(s, a, t)). However, for MDPs with resource constraints (as defined below in Section 3), uniformly-optimal policies do not in general exist. In such cases, α becomes a part of the problem input, and a resulting policy is only optimal for that particular α. This result is well known for infinite-horizon MDPs with various types of constraints [1, 6], and it also holds for our finite-horizon model, which can be easily established via a line of reasoning completely analogous to the arguments in [6].

2.2 Combinatorial Resource Scheduling

A straightforward approach to resource scheduling for a set of agents M, whose values for the resources are induced by stochastic planning problems (in our case, finite-horizon MDPs), would be to have each agent enumerate all possible resource assignments over time and, for each one, compute its value by solving the corresponding MDP. Then, each agent would provide valuations for each possible resource bundle over time to a centralized coordinator, who would compute the optimal resource assignments across time based on these valuations. When resources can be allocated at different times to different agents, each agent must submit valuations for every combination of possible time horizons. Let each agent m ∈ M execute its MDP within the arrival-departure time interval τ ∈ [τ_m^a, τ_m^d]. Hence, agent m will execute an MDP with time horizon no greater than T_m = τ_m^d − τ_m^a + 1. Let τ̂ be the global time horizon for the problem, before which all of the agents' MDPs must finish. We assume τ_m^d < τ̂ for all m ∈ M.

For the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from [1, T_m]) that they may use. Let Ω be the set of resources to be allocated among the agents. An agent will get at most one resource bundle for one of the time horizons. Let the variable ψ ∈ Ψ_m enumerate all possible pairs of resource bundles and time horizons for agent m, so there are 2^|Ω| × T_m values of ψ (the space of bundles is exponential in the number of resource types |Ω|). The agent m must provide a value v_m^ψ for each ψ, and the coordinator will allocate at most one ψ (resource, time horizon) pair to each agent. This allocation is expressed as an indicator variable z_m^ψ ∈ {0, 1} that shows whether ψ is assigned to agent m.
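Before the allocation program below is completed, here is a minimal Python sketch of the Section 2.1 backward-induction computation. It implements the standard finite-horizon recursion, not the authors' code, and the toy transition and reward numbers are invented for illustration.

import numpy as np

def solve_finite_horizon_mdp(p, r, T):
    """Backward induction for a finite-horizon MDP.
    p[a] is the transition matrix for action a (p[a][s, sigma]); r[s, a] is the reward."""
    n_states, n_actions = r.shape
    v = np.zeros((T + 1, n_states))               # v[t] = optimal value with t steps to go
    pi = np.zeros((T + 1, n_states), dtype=int)   # greedy (deterministic) policy
    for t in range(1, T + 1):
        # q[s, a] = r(s, a) + sum_sigma p(sigma | s, a) * v[t-1][sigma]
        q = r + np.stack([p[a] @ v[t - 1] for a in range(n_actions)], axis=1)
        v[t] = q.max(axis=1)
        pi[t] = q.argmax(axis=1)
    return v, pi

# Toy example: 2 states, 2 actions (all numbers illustrative only).
p = [np.array([[0.9, 0.1], [0.2, 0.8]]),          # transitions under action 0
     np.array([[0.5, 0.5], [0.1, 0.9]])]          # transitions under action 1
r = np.array([[1.0, 0.0], [0.0, 2.0]])            # r[s, a]
v, pi = solve_finite_horizon_mdp(p, r, T=5)
print(v[5], pi[5])                                # values and greedy actions, 5 steps to go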
For time τ and resource ω, the function n_m(ψ, τ, ω) ∈ {0, 1} indicates whether the bundle in ψ uses resource ω at time τ (we make the assumption that agents have binary resource requirements). This allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of ψ. The problem of finding an optimal allocation that satisfies the global constraint that the amount of each resource ω allocated to all agents does not exceed the available amount φ̂(ω) can be expressed as the following integer program:

max_z Σ_{m∈M} Σ_{ψ∈Ψ_m} v_m^ψ z_m^ψ
s.t.  Σ_{ψ∈Ψ_m} z_m^ψ ≤ 1   ∀m ∈ M;
      Σ_{m∈M} Σ_{ψ∈Ψ_m} z_m^ψ n_m(ψ, τ, ω) ≤ φ̂(ω)   ∀τ, ∀ω;        (3)
      z_m^ψ ∈ {0, 1}.

The first constraint in equation 3 says that no agent can receive more than one bundle, and the second constraint ensures that the total assignment of resource ω does not, at any time, exceed the resource bound.

For the scheduling problem where the agents are able to dynamically reallocate resources, each agent must specify a value for every combination of bundles and time steps within its time horizon. Let the variable ψ ∈ Ψ_m in this case enumerate all possible sequences of resource bundles, with at most one bundle assigned to agent m at each time step. Therefore, in this case there are Σ_{t∈[1,T_m]} (2^|Ω|)^t ∼ 2^{|Ω| T_m} possible assignments of resource bundles to time slots, across the T_m different time horizons. The same set of equations (3) can be used to solve this dynamic scheduling problem, but the integer program is different because of the difference in how ψ is defined. In this case, the number of ψ values is exponential in each agent's planning horizon T_m, resulting in a much larger program.

This straightforward approach to solving both of these scheduling problems requires the enumeration and solution of either 2^|Ω| T_m (static allocation) or Σ_{t∈[1,T_m]} 2^{|Ω| t} (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable with the growth of the number of resources |Ω| or the time horizon T_m.

3. MODEL AND PROBLEM STATEMENT
4. RESOURCE SCHEDULING
4.1 Augmenting Agents' MDPs
4.2 MILP for Resource Scheduling
5. EXPERIMENTAL RESULTS
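As a concrete, small-scale illustration of integer program (3), the following sketch uses the PuLP library. The instance data (agents, bundles, values, capacities) are invented, and for brevity every chosen bundle is assumed to be held from time 0 for its whole horizon, whereas the full problem also optimizes start times.

import pulp

# Hypothetical instance: 2 agents; each psi is a (bundle of resources, horizon) pair.
agents = ["m1", "m2"]
psi = {"m1": [(frozenset({"cpu"}), 2), (frozenset({"cpu", "disk"}), 3)],
       "m2": [(frozenset({"cpu"}), 3), (frozenset({"disk"}), 2)]}
value = {("m1", 0): 4.0, ("m1", 1): 7.0,          # v_m^psi, assumed given
         ("m2", 0): 5.0, ("m2", 1): 3.0}          # (in the paper, from solving each MDP)
capacity = {"cpu": 1, "disk": 1}                  # available amounts, phi-hat(omega)
global_horizon = 4                                # tau-hat

prob = pulp.LpProblem("bundle_assignment", pulp.LpMaximize)
z = {(m, i): pulp.LpVariable(f"z_{m}_{i}", cat="Binary")
     for m in agents for i in range(len(psi[m]))}
prob += pulp.lpSum(value[m, i] * z[m, i] for (m, i) in z)         # objective
for m in agents:                                                   # at most one psi per agent
    prob += pulp.lpSum(z[m, i] for i in range(len(psi[m]))) <= 1
for tau in range(global_horizon):                                  # resource bound at each time
    for w in capacity:
        # n_m(psi, tau, w) = 1 iff w is in the bundle and tau falls inside its horizon
        prob += pulp.lpSum(z[m, i] for (m, i) in z
                           if w in psi[m][i][0] and tau < psi[m][i][1]) <= capacity[w]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: var.value() for k, var in z.items()})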
I-77
The LOGIC Negotiation Model
Successful negotiators prepare by determining their position along five dimensions: Legitimacy, Options, Goals, Independence, and Commitment, (LOGIC). We introduce a negotiation model based on these dimensions and on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness). The intimacy is a pair of matrices that evaluate both an agent's contribution to the relationship and its opponent's contribution each from an information view and from a utilitarian view across the five LOGIC dimensions. The balance is the difference between these matrices. A relationship strategy maintains a target intimacy for each relationship that an agent would like the relationship to move towards in future. The negotiation strategy maintains a set of Options that are in-line with the current intimacy level, and then tactics wrap the Options in argumentation with the aim of attaining a successful deal and manipulating the successive negotiation balances towards the target intimacy.
[ "negoti", "negoti strategi", "success negoti encount", "long term relationship", "utter", "utilitarian interpret", "ontolog", "set predic", "multiag system", "logic agent architectur", "accept view", "accept criterion", "compon dialogu", "confid measur" ]
[ "P", "P", "M", "M", "U", "M", "U", "M", "U", "M", "M", "U", "U", "U" ]
The LOGIC Negotiation Model

Carles Sierra, Institut d'Investigacio en Intel.ligencia Artificial, Spanish Scientific Research Council, UAB, 08193 Bellaterra, Catalonia, Spain, sierra@iiia.csic.es
John Debenham, Faculty of Information Technology, University of Technology, Sydney, NSW, Australia, debenham@it.uts.edu.au

ABSTRACT

Successful negotiators prepare by determining their position along five dimensions: Legitimacy, Options, Goals, Independence, and Commitment (LOGIC). We introduce a negotiation model based on these dimensions and on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness). The intimacy is a pair of matrices that evaluate both an agent's contribution to the relationship and its opponent's contribution, each from an information view and from a utilitarian view, across the five LOGIC dimensions. The balance is the difference between these matrices. A relationship strategy maintains a target intimacy for each relationship that an agent would like the relationship to move towards in future. The negotiation strategy maintains a set of Options that are in line with the current intimacy level, and then tactics wrap the Options in argumentation with the aim of attaining a successful deal and manipulating the successive negotiation balances towards the target intimacy.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Multiagent systems

General Terms: Theory

1. INTRODUCTION

In this paper we propose a new negotiation model to deal with long-term relationships that are founded on successive negotiation encounters. The model is grounded on results from business and psychological studies [1, 16, 9], and acknowledges that negotiation is an information exchange process as well as a utility exchange process [15, 14]. We believe that if agents are to succeed in real application domains they have to reconcile both views: informational and game-theoretical. Our aim is to model trading scenarios where agents represent their human principals, and thus we want their behaviour to be comprehensible by humans and to respect usual human negotiation procedures, whilst being consistent with, and somehow extending, game-theoretical and information-theoretical results. In this sense, agents are not just utility maximisers, but aim at building long-lasting relationships with progressing levels of intimacy that determine what balance in information and resource sharing is acceptable to them. These two concepts, intimacy and balance, are key in the model, and enable us to understand competitive and co-operative game theory as two particular theories of agent relationships (i.e. at different intimacy levels). These two theories are too specific and distinct to describe how a (business) relationship might grow, because interactions have some aspects of these two extremes on a continuum in which, for example, agents reveal increasing amounts of private information as their intimacy grows. We don't follow the 'Co-Opetition' approach [4], where co-operation and competition depend on the issue under negotiation; instead we believe that the willingness to co-operate/compete affects all aspects of the negotiation process. Negotiation strategies can naturally be seen as procedures that select tactics used to attain a successful deal and to reach a target intimacy level. It is common in human settings to use tactics that compensate for imbalances in one dimension of a negotiation with imbalances in another dimension.
In this sense, humans aim at a general sense of fairness in an interaction. In Section 2 we outline the aspects of human negotiation modelling that we cover in this work. Then, in Section 3 we introduce the negotiation language. Section 4 explains in outline the architecture and the concepts of intimacy and balance, and how they influence the negotiation. Section 5 contains a description of the different metrics used in the agent model, including intimacy. Finally, Section 6 outlines how strategies and tactics use the LOGIC framework, intimacy and balance.

2. HUMAN NEGOTIATION

Before a negotiation starts, human negotiators prepare the dialogic exchanges that can be made along the five LOGIC dimensions [7]:

• Legitimacy. What information is relevant to the negotiation process? What are the persuasive arguments about the fairness of the options?
• Options. What are the possible agreements we can accept?
• Goals. What are the underlying things we need or care about? What are our goals?
• Independence. What will we do if the negotiation fails? What alternatives have we got?
• Commitment. What outstanding commitments do we have?

Negotiation dialogues, in this context, exchange dialogical moves, i.e. messages, with the intention of getting information about the opponent or giving away information about us along these five dimensions: request for information, propose options, inform about interests, issue promises, appeal to standards, and so on. A key part of any negotiation process is to build a model of our opponent(s) along these dimensions. All utterances agents make during a negotiation give away information about their current LOGIC model, that is, about their legitimacy, options, goals, independence, and commitments. Also, several utterances can have a utilitarian interpretation in the sense that an agent can associate a preferential gain to them. For instance, an offer may inform our negotiation opponent about our willingness to sign a contract in the terms expressed in the offer, and at the same time the opponent can compute its associated expected utilitarian gain. These two views, information-based and utility-based, are central in the model proposed in this paper.

2.1 Intimacy and Balance in relationships

There is evidence from psychological studies that humans seek a balance in their negotiation relationships. The classical view [1] is that people perceive resource allocations as being distributively fair (i.e. well balanced) if they are proportional to inputs or contributions (i.e. equitable). However, more recent studies [16, 17] show that humans follow a richer set of norms of distributive justice depending on their intimacy level: equity, equality, and need. Equity is allocation proportional to the effort (e.g. the profit of a company goes to the stockholders in proportion to their investment), equality is allocation in equal amounts (e.g. two friends eat the same amount of a cake cooked by one of them), and need is allocation proportional to the need for the resource (e.g. in case of food scarcity, a mother gives all food to her baby). For instance, if we are in a purely economic setting (low intimacy) we might request equity for the Options dimension but could accept equality in the Goals dimension. The perception of a relation being in balance (i.e. fair) depends strongly on the nature of the social relationships between individuals (i.e. the intimacy level).
In purely economical relationships (e.g. business), equity is perceived as more fair; in relations where joint action or fostering of social relationships are the goal (e.g. friends), equality is perceived as more fair; and in situations where personal development or personal welfare are the goal (e.g. family), allocations are usually based on need. We believe that the perception of balance in dialogues (in negotiation or otherwise) is grounded on social relationships, and that every dimension of an interaction between humans can be correlated to the social closeness, or intimacy, between the parties involved. According to the previous studies, the more intimacy across the five LOGIC dimensions, the more the need norm is used, and the less intimacy, the more the equity norm is used. This might be part of our social evolution. There is ample evidence that when human societies evolved from a hunter-gatherer structure(1) to a shelter-based one(2), the probability of survival increased when food was scarce. In this context, we can clearly see that, for instance, families exchange not only goods but also information and knowledge based on need, and that few families would consider their relationships as being unbalanced, and thus unfair, when there is a strong asymmetry in the exchanges (a mother explaining everything to her children, or buying toys, does not expect reciprocity). In the case of partners there is some evidence [3] that the allocations of goods and burdens (i.e. positive and negative utilities) are perceived as fair, or in balance, based on equity for burdens and equality for goods. See Table 1 for some examples of desired balances along the LOGIC dimensions. The perceived balance in a negotiation dialogue allows negotiators to infer information about their opponent, about its LOGIC stance, and to compare their relationships with all negotiators. For instance, if we perceive that every time we request information it is provided, and that no significant questions are returned, or no complaints about not receiving information are given, then that probably means that our opponent perceives our social relationship to be very close. Alternatively, we can detect what issues are causing a burden to our opponent by observing an imbalance in the information or utilitarian senses on that issue.

3. COMMUNICATION MODEL

3.1 Ontology

In order to define a language to structure agent dialogues we need an ontology that includes a (minimum) repertoire of elements: a set of concepts (e.g. quantity, quality, material) organised in an is-a hierarchy (e.g. platypus is a mammal, Australian-dollar is a currency), and a set of relations over these concepts (e.g. price(beer, AUD)).(3) We model ontologies following an algebraic approach [8] as follows. An ontology is a tuple O = (C, R, ≤, σ) where:

1. C is a finite set of concept symbols (including basic data types);
2. R is a finite set of relation symbols;
3. ≤ is a reflexive, transitive and anti-symmetric relation on C (a partial order);
4. σ : R → C+ is the function assigning to each relation symbol its arity.

(1) In its purest form, individuals in these societies collect food and consume it when and where it is found. This is a pure equity sharing of the resources; the gain is proportional to the effort.
(2) In these societies there are family units, around a shelter, that represent the basic food-sharing structure. Usually, food is accumulated at the shelter for future use. Then the food intake depends more on the need of the members.
(3) Usually, a set of axioms defined over the concepts and relations is also required. We will omit this here.

Table 1: Some desired balances (sense of fairness) depending on the relationship.

Element       | a new trading partner | my butcher | my boss  | my partner | my children
Legitimacy    | equity                | equity     | equity   | equality   | need
Options       | equity                | equity     | equity   | mixed(a)   | need
Goals         | equity                | need       | equity   | need       | need
Independence  | equity                | equity     | equality | need       | need
Commitment    | equity                | equity     | equity   | mixed      | need
(a) equity on burden, equality on good

In the tuple above, ≤ is the traditional is-a hierarchy. To simplify computations in the computing of probability distributions, we assume that there is a number of disjoint is-a trees covering different ontological spaces (e.g. a tree for types of fabric, a tree for shapes of clothing, and so on). R contains relations between the concepts in the hierarchy; this is needed to define 'objects' (e.g. deals) that are defined as a tuple of issues. The semantic distance between concepts within an ontology depends on how far away they are in the structure defined by the ≤ relation. Semantic distance plays a fundamental role in strategies for information-based agency. How signed contracts, Commit(·), about objects in a particular semantic region, and their execution, Done(·), affect our decision-making process about signing future contracts in nearby semantic regions is crucial to modelling the common sense that human beings apply in managing trading relationships. A measure [10] bases the semantic similarity between two concepts on the path length induced by ≤ (more distance in the ≤ graph means less semantic similarity), and the depth of the subsumer concept (common ancestor) in the shortest path between the two concepts (the deeper in the hierarchy, the closer the meaning of the concepts). Semantic similarity is then defined as:

Sim(c, c′) = e^(−κ1 l) · (e^(κ2 h) − e^(−κ2 h)) / (e^(κ2 h) + e^(−κ2 h))

where l is the length (i.e. number of hops) of the shortest path between the concepts, h is the depth of the deepest concept subsuming both concepts, and κ1 and κ2 are parameters scaling the contributions of the shortest path length and the depth respectively.

3.2 Language

The shape of the language that α uses to represent the information received and the content of its dialogues depends on two fundamental notions. First, when agents interact within an overarching institution they explicitly or implicitly accept the norms that will constrain their behaviour, and accept the established sanctions and penalties whenever norms are violated. Second, the dialogues in which α engages are built around two fundamental actions: (i) passing information, and (ii) exchanging proposals and contracts. A contract δ = (a, b) between agents α and β is a pair where a and b represent the actions that agents α and β are responsible for respectively. Contracts signed by agents, and information passed by agents, are similar to norms in the sense that they oblige agents to behave in a particular way, so as to satisfy the conditions of the contract, or to make the world consistent with the information passed. Contracts and information can thus be thought of as normative statements that restrict an agent's behaviour. Norms, contracts, and information have an obvious temporal dimension. Thus, an agent has to abide by a norm while it is inside an institution, a contract has a validity period, and a piece of information is true only during an interval in time.
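Returning briefly to the similarity measure of Section 3.1: it reduces to a one-line function, since (e^(κ2 h) − e^(−κ2 h)) / (e^(κ2 h) + e^(−κ2 h)) is just tanh(κ2 h). The Python sketch below assumes l and h have already been computed from the is-a trees elsewhere; the κ values are illustrative.

import math

def sim(l, h, kappa1=0.5, kappa2=0.5):
    """Semantic similarity of Section 3.1: l is the shortest is-a path length between
    the two concepts, h is the depth of their deepest common subsumer."""
    return math.exp(-kappa1 * l) * math.tanh(kappa2 * h)

print(sim(l=1, h=4))   # nearer concepts under a deeper subsumer score higher
print(sim(l=6, h=1))   # distant concepts under a shallow subsumer score near 0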
The set of norms affecting the behaviour of an agent defines the context that the agent has to take into account. α's communication language has two fundamental primitives: Commit(α, β, ϕ), to represent, in ϕ, the world that α aims at bringing about and that β has the right to verify, complain about, or claim compensation for any deviations from; and Done(μ), to represent the event that a certain action μ(4) has taken place. In this way, norms, contracts, and information chunks will be represented as instances of Commit(·) where α and β can be individual agents or institutions. The communication language C is:

μ ::= illoc(α, β, ϕ, t) | μ; μ | Let context In μ End
ϕ ::= term | Done(μ) | Commit(α, β, ϕ) | ϕ ∧ ϕ | ϕ ∨ ϕ | ¬ϕ | ∀v.ϕv | ∃v.ϕv
context ::= ϕ | id = ϕ | prolog clause | context; context

where ϕv is a formula with free variable v, illoc is any appropriate set of illocutionary particles, ';' means sequencing, and context represents either previous agreements, previous illocutions, the ontological working context (that is, a projection of the ontological trees that represent the focus of the conversation), or code that aligns the ontological differences between the speakers needed to interpret an action a. Representing an ontology as a set of predicates in Prolog is simple. The set term contains instances of the ontology concepts and relations.(5) For example, we can represent the offer "If you spend a total of more than €100 in my shop during October then I will give you a 10% discount on all goods in November" as:

Offer(α, β, spent(β, α, October, X) ∧ X ≥ €100 →
  ∀y. Done(Inform(ξ, α, pay(β, α, y), November)) → Commit(α, β, discount(y, 10%)))

where ξ is an institution agent that reports the payment.

(4) Without loss of generality we will assume that all actions are dialogical.
(5) We assume the convention that C(c) means that c is an instance of concept C, and that r(c1, ..., cn) implicitly determines that ci is an instance of the concept in the i-th position of the relation r.

Figure 1: The LOGIC agent architecture.

4. AGENT ARCHITECTURE

A multiagent system {α, β1, ..., βn, ξ, θ1, ..., θt} contains an agent α that interacts with other argumentation agents βi, information-providing agents θj, and an institutional agent ξ that represents the institution where we assume the interactions happen [2]. The institutional agent reports promptly and honestly on what actually occurs after an agent signs a contract, or makes some other form of commitment. In Section 4.1 this enables us to measure the difference between an utterance and a subsequent observation. The communication language C introduced in Section 3.2 enables us both to structure the dialogues and to structure the processing of the information gathered by agents. Agents have a probabilistic first-order internal language L used to represent a world model, Mt. A generic information-based architecture is described in detail in [15]. The LOGIC agent architecture is shown in Figure 1. Agent α acts in response to a need that is expressed in terms of the ontology. A need may be exogenous, such as a need to trade profitably, and may be triggered by another agent offering to trade, or endogenous, such as α deciding that it owns more wine than it requires.
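One minimal way to realize the primitives of the language C in code is as plain algebraic data types. The Python class and field names below are invented for illustration and carry no claim about the authors' implementation.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Commit:
    """Commit(alpha, beta, phi): alpha commits to bring about phi; beta may verify it."""
    debtor: str
    creditor: str
    condition: "Formula"

@dataclass(frozen=True)
class Done:
    """Done(mu): the action (utterance) mu has taken place."""
    action: "Utterance"

@dataclass(frozen=True)
class Utterance:
    """illoc(alpha, beta, phi, t)."""
    illoc: str          # e.g. "Offer", "Inform", "Reject"
    speaker: str
    hearer: str
    content: "Formula"
    time: int

Formula = Union[str, Commit, Done]   # terms are kept as plain strings in this sketch

# A stripped-down version of the Offer example above:
offer = Utterance("Offer", "alpha", "beta",
                  Commit("alpha", "beta", "discount(y, 10%)"), time=1)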
Needs trigger α's goal/plan proactive reasoning, while other messages are dealt with by α's reactive reasoning.(6) Each plan prepares for the negotiation by assembling the contents of a 'LOGIC briefcase' that the agent 'carries' into the negotiation.(7) The relationship strategy determines which agent to negotiate with for a given need; it uses risk management analysis to preserve a strategic set of trading relationships for each mission-critical need - this is not detailed here. For each trading relationship this strategy generates a relationship target that is expressed in the LOGIC framework as a desired level of intimacy to be achieved in the long term.

Each negotiation consists of a dialogue, Ψt, between two agents, with agent α contributing utterance μ and the partner β contributing μ′, using the language described in Section 3.2. Each dialogue, Ψt, is evaluated using the LOGIC framework in terms of the value of Ψt to both α and β - see Section 5.2. The negotiation strategy then determines the current set of Options {δi}, and then the tactics, guided by the negotiation target, decide which, if any, of these Options to put forward, and wrap them in argumentation dialogue - see Section 6.

We now describe two of the distributions in Mt that support offer exchange. Pt(acc(α, β, χ, δ)) estimates the probability that α should accept proposal δ in satisfaction of her need χ, where δ = (a, b) is a pair of commitments, a for α and b for β. α will accept δ if Pt(acc(α, β, χ, δ)) > c, for level of certainty c. This estimate is compounded from subjective and objective views of acceptability. The subjective estimate takes account of: the extent to which the enactment of δ will satisfy α's need χ, how much δ is 'worth' to α, and the extent to which α believes that she will be in a position to execute her commitment a [14, 15]. Sα(β, a) is a random variable denoting α's estimate of β's subjective valuation of a over some finite, numerical evaluation space. The objective estimate captures whether δ is acceptable on the open market, and the variable Uα(b) denotes α's open-market valuation of the enactment of commitment b, again taken over some finite numerical valuation space. We also consider needs: the variable Tα(β, a) denotes α's estimate of the strength of β's motivating need for the enactment of commitment a, over a valuation space. Then for δ = (a, b):

Pt(acc(α, β, χ, δ)) = Pt( (Tα(β, a) / Tα(α, b))^h × (Sα(α, b) / Sα(β, a))^g × Uα(b) / Uα(a) ≥ s )   (1)

where g ∈ [0, 1] is α's greed, h ∈ [0, 1] is α's degree of altruism, and s ≈ 1 is derived from the stance(8) described in Section 6. The parameters g and h are independent. We can imagine a relationship that begins with g = 1 and h = 0. Then, as the agents share increasing amounts of information about their open-market valuations, g gradually reduces to 0, and then, as they share increasing amounts of information about their needs, h increases to 1. The basis for the acceptance criterion has thus developed from equity to equality, and then to need.

(6) Each of α's plans and reactions contain constructors for an initial world model Mt. Mt is then maintained from percepts received using update functions that transform percepts into constraints on Mt - for details, see [14, 15].
(7) Empirical evidence shows that in human negotiation, better outcomes are achieved by skewing the opening Options in favour of the proposer. We are unaware of any empirical investigation of this hypothesis for autonomous agents in real trading scenarios.
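A sketch of the acceptance criterion of Equation 1. The equation is a probability over random variables; the sketch below substitutes point estimates for those variables, so it shows only the shape of the test, and all numbers are invented.

def accept_score(T_beta_a, T_alpha_b, S_alpha_b, S_beta_a, U_b, U_a, g, h):
    """Left-hand side of Eqn. 1 for delta = (a, b): a is alpha's commitment, b is beta's.
    g = alpha's greed, h = alpha's altruism, both in [0, 1]."""
    needs = (T_beta_a / T_alpha_b) ** h      # T_alpha(beta, a) / T_alpha(alpha, b)
    subj = (S_alpha_b / S_beta_a) ** g       # S_alpha(alpha, b) / S_alpha(beta, a)
    market = U_b / U_a                       # U_alpha(b) / U_alpha(a)
    return needs * subj * market

# alpha accepts delta when the combined score clears the stance-derived threshold s ~ 1:
s = 1.0
print(accept_score(2.0, 1.0, 3.0, 2.5, 1.1, 1.0, g=0.8, h=0.2) >= s)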
Pt(acc(β, α, δ)) estimates the probability that β would accept δ, by observing β's responses. For example, if β sends the message Offer(δ1) then α derives the constraint {Pt(acc(β, α, δ1)) = 1} on the distribution Pt(β, α, δ), and if this is a counter-offer to a former offer of α's, δ0, then: {Pt(acc(β, α, δ0)) = 0}. In the not-atypical special case of multi-issue bargaining where the agents' preferences over the individual issues only are known and are complementary to each other's, maximum entropy reasoning can be applied to estimate the probability that any multi-issue δ will be acceptable to β by enumerating the possible worlds that represent β's limit of acceptability [6].

(8) If α chooses to inflate her opening Options then this is achieved in Section 6 by increasing the value of s. If s ≫ 1 then a deal may not be possible. This illustrates the well-known inefficiency of bilateral bargaining established analytically by Myerson and Satterthwaite in 1983.

4.1 Updating the World Model Mt

α's world model consists of probability distributions that represent its uncertainty in the world state. α is interested in the degree to which an utterance accurately describes what will subsequently be observed. All observations about the world are received as utterances from an all-truthful institution agent ξ. For example, if β communicates the goal "I am hungry" and the subsequent negotiation terminates with β purchasing a book from α (by ξ advising α that a certain amount of money has been credited to α's account), then α may conclude that the goal that β chose to satisfy was something other than hunger. So, α's world model contains probability distributions that represent its uncertain expectations of what will be observed on the basis of utterances received. We represent the relationship between an utterance, ϕ, and a subsequent observation, ϕ′, by Pt(ϕ′|ϕ) ∈ Mt, where ϕ and ϕ′ may be ontological categories in the interest of computational feasibility. For example, if ϕ is "I will deliver a bucket of fish to you tomorrow" then the distribution P(ϕ′|ϕ) need not be over all possible things that β might do, but could be over ontological categories that summarise β's possible actions.

In the absence of in-coming utterances, the conditional probabilities, Pt(ϕ′|ϕ), should tend to ignorance as represented by a decay limit distribution D(ϕ′|ϕ). α may have background knowledge concerning D(ϕ′|ϕ) as t → ∞; otherwise α may assume that it has maximum entropy whilst being consistent with the data. In general, given a distribution Pt(Xi) and a decay limit distribution D(Xi), Pt(Xi) decays by:

Pt+1(Xi) = Δi(D(Xi), Pt(Xi))   (2)

where Δi is the decay function for the Xi satisfying the property that lim_{t→∞} Pt(Xi) = D(Xi). For example, Δi could be linear: Pt+1(Xi) = (1 − νi) × D(Xi) + νi × Pt(Xi), where νi < 1 is the decay rate for the i-th distribution. Either the decay function or the decay limit distribution could also be a function of time: Δti and Dt(Xi).

Suppose that α receives an utterance μ = illoc(α, β, ϕ, t) from agent β at time t, and suppose that α attaches an epistemic belief Rt(α, β, μ) to μ - this probability takes account of α's level of personal caution. We model the update of Pt(ϕ′|ϕ) in two cases: one for observations given ϕ, and a second for observations given φ in the semantic neighbourhood of ϕ.
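The linear form of the decay update in Equation 2 is easy to exercise; the three-point distribution below is a toy.

import numpy as np

def decay_step(P_t, D, nu=0.9):
    """Linear instance of Eqn. 2: drift the current distribution P^t toward
    the decay-limit distribution D; nu < 1 is the decay rate."""
    return (1.0 - nu) * D + nu * P_t

P = np.array([0.7, 0.2, 0.1])      # current P^t(X_i)
D = np.full(3, 1.0 / 3.0)          # maximum-entropy decay limit (ignorance)
for _ in range(100):
    P = decay_step(P, D)
print(P)                           # ~ [1/3, 1/3, 1/3]: integrity decays to D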
4.2 Update of Pt(ϕ′|ϕ) given ϕ

First, if ϕk is observed then α may set Pt+1(ϕk|ϕ) to some value d, where {ϕ1, ϕ2, ..., ϕm} is the set of all possible observations. We estimate the complete posterior distribution Pt+1(ϕ′|ϕ) by applying the principle of minimum relative entropy(9) as follows. Let p(μ) be the distribution arg min_x Σ_j x_j log (x_j / Pt(ϕ′|ϕ)_j) that satisfies the constraint p(μ)_k = d. Then let q(μ) be the distribution:

q(μ) = Rt(α, β, μ) × p(μ) + (1 − Rt(α, β, μ)) × Pt(ϕ′|ϕ)

and then let r(μ) = q(μ) if q(μ) is more interesting than Pt(ϕ′|ϕ), and r(μ) = Pt(ϕ′|ϕ) otherwise. A general measure of whether q(μ) is more interesting than Pt(ϕ′|ϕ) is: K(q(μ) ∥ D(ϕ′|ϕ)) > K(Pt(ϕ′|ϕ) ∥ D(ϕ′|ϕ)), where K(x ∥ y) = Σ_j x_j ln (x_j / y_j) is the Kullback-Leibler distance between two probability distributions x and y [11]. Finally, incorporating Eqn. 2, we obtain the method for updating a distribution Pt(ϕ′|ϕ) on receipt of a message μ:

Pt+1(ϕ′|ϕ) = Δi(D(ϕ′|ϕ), r(μ))   (3)

This procedure deals with integrity decay, and with two probabilities: first, the probability z in the utterance μ, and second the belief Rt(α, β, μ) that α attached to μ.

(9) Given a probability distribution q, the minimum relative entropy distribution p = (p1, ..., pI) subject to a set of J linear constraints g = {gj(p) = aj · p − cj = 0}, j = 1, ..., J (that must include the constraint Σ_i p_i − 1 = 0), is p = arg min_r Σ_j r_j log (r_j / q_j). This may be calculated by introducing Lagrange multipliers λ: L(p, λ) = Σ_j p_j log (p_j / q_j) + λ · g. Minimising L, {∂L/∂λ_j = g_j(p) = 0}, j = 1, ..., J, is the set of given constraints g, and a solution to ∂L/∂p_i = 0, i = 1, ..., I, leads eventually to p. Entropy-based inference is a form of Bayesian inference that is convenient when the data is sparse [5] and encapsulates common-sense reasoning [12].

4.3 Update of Pt(φ′|φ) given ϕ

The sim method: Given as above μ = illoc(α, β, ϕ, t) and the observation ϕk, we define the vector t by

t_i = Pt(φ_i|φ) + (1 − |Sim(ϕk, ϕ) − Sim(φ_i, φ)|) · Sim(ϕk, φ),   i = 1, ..., p,

with {φ1, φ2, ..., φp} the set of all possible observations in the context of φ. t is not a probability distribution. The multiplying factor Sim(ϕk, φ) limits the variation of probability to those formulae whose ontological context is not too far away from the observation. The posterior Pt+1(φ′|φ) is obtained with Equation 3, with r(μ) defined to be the normalisation of t.

The valuation method: For a given φk, w^exp(φk) = Σ_{j=1}^{m} Pt(φ_j|φk) · w(φ_j) is α's expectation of the value of what will be observed given that β has stated that φk will be observed, for some measure w. Now suppose that, as before, α observes ϕk after agent β has stated ϕ. α revises the prior estimate of the expected valuation w^exp(φk) in the light of the observation ϕk to:

(w^rev(φk) | (ϕk|ϕ)) = g(w^exp(φk), Sim(φk, ϕ), w(φk), w(ϕ), w_i(ϕk))

for some function g - the idea being, for example, that if the execution, ϕk, of the commitment, ϕ, to supply cheese was devalued then α's expectation of the value of a commitment, φ, to supply wine should decrease. We estimate the posterior by applying the principle of minimum relative entropy as for Equation 3, where the distribution p(μ) = p(φ′|φ) satisfies the constraint:

Σ_{j=1}^{p} p(ϕ′,ϕ)_j · w_i(φ_j) = g(w^exp(φk), Sim(φk, ϕ), w(φk), w(ϕ), w_i(ϕk))

5. SUMMARY MEASURES

A dialogue, Ψt, between agents α and β is a sequence of inter-related utterances in context. A relationship, Ψ∗t, is a sequence of dialogues.
We first measure the confidence that an agent has for another by observing, for each utterance, the difference between what is said (the utterance) and what subsequently occurs (the observation). Second, we evaluate each dialogue as it progresses in terms of the LOGIC framework - this evaluation employs the confidence measures. Finally, we define the intimacy of a relationship as an aggregation of the value of its component dialogues.

5.1 Confidence

Confidence measures generalise what are commonly called trust, reliability and reputation measures into a single computational framework that spans the LOGIC categories. In Section 5.2 confidence measures are applied to valuing fulfilment of promises in the Legitimacy category - we formerly called this honour [14] - to the execution of commitments - we formerly called this trust [13] - and to valuing dialogues in the Goals category - we formerly called this reliability [14].

Ideal observations. Consider a distribution of observations that represent α's ideal, in the sense that it is the best that α could reasonably expect to observe. This distribution will be a function of α's context with β, denoted by e, and is Pt_I(ϕ′|ϕ, e). Here we measure the relative entropy between this ideal distribution, Pt_I(ϕ′|ϕ, e), and the distribution of expected observations, Pt(ϕ′|ϕ). That is:

C(α, β, ϕ) = 1 − Σ_{ϕ′} Pt_I(ϕ′|ϕ, e) log ( Pt_I(ϕ′|ϕ, e) / Pt(ϕ′|ϕ) )   (4)

where the 1 is an arbitrarily chosen constant, being the maximum value that this measure may have. This equation measures confidence for a single statement ϕ. It makes sense to aggregate these values over a class of statements, say over those ϕ that are in the ontological context o, that is ϕ ≤ o:

C(α, β, o) = 1 − ( Σ_{ϕ: ϕ≤o} Pt_β(ϕ) [1 − C(α, β, ϕ)] ) / ( Σ_{ϕ: ϕ≤o} Pt_β(ϕ) )

where Pt_β(ϕ) is a probability distribution over the space of statements that the next statement β will make to α is ϕ. Similarly, for an overall estimate of β's confidence in α:

C(α, β) = 1 − Σ_ϕ Pt_β(ϕ) [1 − C(α, β, ϕ)]

Preferred observations. The previous measure requires that an ideal distribution, Pt_I(ϕ′|ϕ, e), has to be specified for each ϕ. Here we measure the extent to which the observation ϕ′ is preferable to the original statement ϕ. Given a predicate Prefer(c1, c2, e), meaning that α prefers c1 to c2 in environment e, then if ϕ ≤ o:

C(α, β, ϕ) = Σ_{ϕ′} Pt(Prefer(ϕ′, ϕ, o)) Pt(ϕ′|ϕ)

and:

C(α, β, o) = ( Σ_{ϕ: ϕ≤o} Pt_β(ϕ) C(α, β, ϕ) ) / ( Σ_{ϕ: ϕ≤o} Pt_β(ϕ) )

Certainty in observation. Here we measure the consistency in expected acceptable observations, or the lack of expected uncertainty in those possible observations that are better than the original statement. If ϕ ≤ o, let:

Φ+(ϕ, o, κ) = { ϕ′ | Pt(Prefer(ϕ′, ϕ, o)) > κ }

for some constant κ, and:

C(α, β, ϕ) = 1 + (1 / B∗) · Σ_{ϕ′ ∈ Φ+(ϕ,o,κ)} Pt_+(ϕ′|ϕ) log Pt_+(ϕ′|ϕ)

where Pt_+(ϕ′|ϕ) is the normalisation of Pt(ϕ′|ϕ) for ϕ′ ∈ Φ+(ϕ, o, κ), and B∗ = 1 if |Φ+(ϕ, o, κ)| = 1, and B∗ = log |Φ+(ϕ, o, κ)| otherwise. As above, we aggregate this measure for observations in a particular context o, and measure confidence as before.

Computational Note. The various measures given above involve extensive calculations. For example, Eqn. 4 contains Σ_{ϕ′}, which sums over all possible observations ϕ′. We obtain a more computationally friendly measure by appealing to the structure of the ontology described in Section 3.2;
the right-hand side of Eqn. 4 may then be approximated to:

1 − Σ_{ϕ′: Sim(ϕ′,ϕ) ≥ η} Pt_{η,I}(ϕ′|ϕ, e) log ( Pt_{η,I}(ϕ′|ϕ, e) / Pt_η(ϕ′|ϕ) )

where Pt_{η,I}(ϕ′|ϕ, e) is the normalisation of Pt_I(ϕ′|ϕ, e) for Sim(ϕ′, ϕ) ≥ η, and similarly for Pt_η(ϕ′|ϕ). The extent of this calculation is controlled by the parameter η. An even tighter restriction may be obtained with: Sim(ϕ′, ϕ) ≥ η and ϕ′ ≤ ψ for some ψ.

5.2 Valuing negotiation dialogues

Suppose that a negotiation commences at time s, and by time t a string of utterances, Φt = (μ1, ..., μn), has been exchanged between agent α and agent β. This negotiation dialogue is evaluated by α in the context of α's world model at time s, Ms, and the environment e that includes utterances that may have been received from other agents in the system, including the information sources {θi}. Let Ψt = (Φt, Ms, e); then α estimates the value of this dialogue to itself in the context of Ms and e as a 2 × 5 array Vα(Ψt), where:

V_x(Ψt) = [ I^L_x(Ψt)  I^O_x(Ψt)  I^G_x(Ψt)  I^I_x(Ψt)  I^C_x(Ψt)
            U^L_x(Ψt)  U^O_x(Ψt)  U^G_x(Ψt)  U^I_x(Ψt)  U^C_x(Ψt) ]

where the I(·) and U(·) functions are information-based and utility-based measures respectively, as we now describe. α estimates the value of this dialogue to β as Vβ(Ψt) by assuming that β's reasoning apparatus mirrors its own. In general terms, the information-based valuations measure the reduction in uncertainty, or information gain, that the dialogue gives to each agent; they are expressed in terms of decrease in entropy, which can always be calculated. The utility-based valuations measure utility gain and are expressed in terms of some suitable utility evaluation function U(·) that can be difficult to define. This is one reason why the utilitarian approach has no natural extension to the management of argumentation, which is achieved here by our information-based approach. For example, if α receives the utterance "Today is Tuesday" then this may be translated into a constraint on a single distribution, and the resulting decrease in entropy is the information gain. Attaching a utilitarian measure to this utterance may not be so simple. We use the term "2 × 5 array" loosely, in that the elements of the array are lists of measures that will be determined by the agent's requirements. Table 2 shows a sample measure for each of the ten categories; in it the dialogue commences at time s and terminates at time t. In that Table, U(·) is a suitable utility evaluation function, needs(β, χ) means agent β needs the need χ, cho(β, χ, γ) means agent β satisfies need χ by choosing to negotiate with agent γ, N is the set of needs chosen from the ontology at some suitable level of abstraction, Tt is the set of offers on the table at time t, com(β, γ, b) means agent β has an outstanding commitment with agent γ to execute the commitment b, where b is defined in the ontology at some suitable level of abstraction, B is the number of such commitments, and there are n + 1 agents in the system.

5.3 Intimacy and Balance

The balance in a negotiation dialogue, Ψt, is defined as B_{αβ}(Ψt) = Vα(Ψt) ⊖ Vβ(Ψt) for an element-by-element difference operator ⊖ that respects the structure of V(Ψt). The intimacy between agents α and β, I∗t_{αβ}, is the pattern of the two 2 × 5 arrays V∗t_α and V∗t_β that are computed by an update function as each negotiation round terminates: I∗t_{αβ} = (V∗t_α, V∗t_β).
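The 2 × 5 arrays of Section 5.2 and the balance of Section 5.3 map directly onto small matrices; the entries below are placeholders rather than outputs of Table 2's measures, and ordinary element-wise subtraction stands in for the paper's difference operator.

import numpy as np

# Rows: information-based (I) and utility-based (U) measures.
# Columns: the five LOGIC dimensions L, O, G, I, C.
V_alpha = np.array([[0.30, 0.50, 0.20, 0.10, 0.40],
                    [0.20, 0.60, 0.10, 0.05, 0.30]])
V_beta = np.array([[0.40, 0.40, 0.20, 0.20, 0.30],
                   [0.10, 0.50, 0.20, 0.10, 0.20]])

balance = V_alpha - V_beta   # B_ab(Psi^t): element-by-element difference
print(balance)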
If Ψt terminates at time t:

V∗t+1_x = ν × V_x(Ψt) + (1 − ν) × V∗t_x   (5)

where ν is the learning rate, and x = α, β. Additionally, V∗t_x continually decays by V∗t+1_x = τ × V∗t_x + (1 − τ) × D_x, where x = α, β; τ is the decay rate, and D_x is a 2 × 5 array being the decay limit distribution for the value to agent x of the intimacy of the relationship in the absence of any interaction. D_x is the reputation of agent x. The relationship balance between agents α and β is B∗t_{αβ} = V∗t_α ⊖ V∗t_β. In particular, the intimacy determines values for the parameters g and h in Equation 1. As a simple example, if both I^O_α(Ψ∗t) and I^O_β(Ψ∗t) increase then g decreases, and as the remaining eight information-based LOGIC components increase, h increases.

The notion of balance may be applied to pairs of utterances by treating them as degenerate dialogues. In simple multi-issue bargaining the equitable information revelation strategy generalises the tit-for-tat strategy in single-issue bargaining, and extends to a tit-for-tat argumentation strategy by applying the same principle across the LOGIC framework.

6. STRATEGIES AND TACTICS

Each negotiation has to achieve two goals. First, it may be intended to achieve some contractual outcome. Second, it will aim to contribute to the growth, or decline, of the relationship intimacy. We now describe in greater detail the contents of the Negotiation box in Figure 1. The negotiation literature consistently advises that an agent's behaviour should not be predictable even in close, intimate relationships. The required variation of behaviour is normally described as varying the negotiation stance, which informally varies from "friendly guy" to "tough guy". The stance is shown in Figure 1; it injects bounded random noise into the process, where the bound tightens as intimacy increases. The stance, S^t_{αβ}, is a 2 × 5 matrix of randomly chosen multipliers, each ≈ 1, that perturbs α's actions. The value in the (x, y) position in the matrix, where x = I, U and y = L, O, G, I, C, is chosen at random from [1 / l(I∗t_{αβ}, x, y), l(I∗t_{αβ}, x, y)], where l(I∗t_{αβ}, x, y) is the bound and I∗t_{αβ} is the intimacy.

The negotiation strategy is concerned with maintaining a working set of Options. If the set of options is empty then α will quit the negotiation. α perturbs the acceptance machinery (see Section 4) by deriving s from the S^t_{αβ} matrix, such as the value at the (I, O) position. In line with the comment in Footnote 7, in the early stages of the negotiation α may decide to inflate her opening Options. This is achieved by increasing the value of s in Equation 1. The following strategy uses the machinery described in Section 4. Fix h, g, s and c, set the Options to the empty set, and let D^t_s = {δ | Pt(acc(α, β, χ, δ)) > c}; then:

• repeat the following as many times as desired: add δ = arg max_x {Pt(acc(β, α, x)) | x ∈ D^t_s} to Options, and remove {y ∈ D^t_s | Sim(y, δ) < k}, for some k, from D^t_s.

By using Pt(acc(β, α, δ)) this strategy reacts to β's history of Propose and Reject utterances. Negotiation tactics are concerned with selecting some Options and wrapping them in argumentation. Prior interactions with agent β will have produced an intimacy pattern expressed in the form of (V∗t_α, V∗t_β). Suppose that the relationship target is (T∗t_α, T∗t_β).
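A sketch of the intimacy update (Equation 5) together with the decay toward reputation described above; the learning and decay rates and the stand-in arrays are illustrative.

import numpy as np

def intimacy_update(V_star, V_dialogue, nu=0.3):
    """Eqn. 5: fold the value of a just-finished dialogue into intimacy."""
    return nu * V_dialogue + (1.0 - nu) * V_star

def intimacy_decay(V_star, D_reputation, tau=0.95):
    """Between interactions, intimacy decays toward the reputation array D_x."""
    return tau * V_star + (1.0 - tau) * D_reputation

V_star = np.zeros((2, 5))              # initial V*_x
V_dialogue = np.random.rand(2, 5)      # stand-in for V_x(Psi^t) from Table 2's measures
D_reputation = np.full((2, 5), 0.1)    # stand-in reputation of the partner

V_star = intimacy_update(V_star, V_dialogue)   # after a dialogue terminates
V_star = intimacy_decay(V_star, D_reputation)  # then decay while idle
print(V_star)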
Following from Equation 5, α will want to achieve a negotiation target, Nβ(Ψt), such that ν · Nβ(Ψt) + (1 − ν) · V∗t_β is a bit on the T∗t_β side of V∗t_β:

Nβ(Ψt) = ((ν − κ) / ν) V∗t_β ⊕ (κ / ν) T∗t_β   (6)

for small κ ∈ [0, ν] that represents α's desired rate of development for her relationship with β. Nβ(Ψt) is a 2 × 5 matrix containing variations in the LOGIC dimensions that α would like to reveal to β during Ψt (e.g. "I'll pass a bit more information on options than usual", "I'll be stronger in concessions on options", etc.). It is reasonable to expect β to progress towards her target at the same rate, and Nα(Ψt) is calculated by replacing β by α in Equation 6. Nα(Ψt) is what α hopes to receive from β during Ψt. This gives a negotiation balance target of Nα(Ψt) ⊖ Nβ(Ψt) that can be used as the foundation for reactive tactics by striving to maintain this balance across the LOGIC dimensions. A cautious tactic could use the balance to bound the response μ′ to each utterance μ from β by the constraint: Vα(μ′) ⊖ Vβ(μ) ≈ S^t_{αβ} ⊗ (Nα(Ψt) ⊖ Nβ(Ψt)), where ⊗ is element-by-element matrix multiplication and S^t_{αβ} is the stance. A less neurotic tactic could attempt to achieve the target negotiation balance over the anticipated complete dialogue. If a balance bound requires negative information revelation in one LOGIC category then α will contribute nothing to it, and will leave this to the natural decay to the reputation D as described above.

7. DISCUSSION

In this paper we have introduced a novel approach to negotiation that uses information- and game-theoretical measures grounded on business and psychological studies. It introduces the concepts of intimacy and balance as key elements in understanding what a negotiation strategy and tactic are. Negotiation is understood as a dialogue that affects five basic dimensions: Legitimacy, Options, Goals, Independence, and Commitment. Each dialogical move produces a change in a 2 × 5 matrix that evaluates the dialogue along five information-based measures and five utility-based measures. The current balance and intimacy levels and the desired, or target, levels are used by the tactics to determine what to say next. We are currently exploring the use of this model as an extension of a currently widespread eProcurement software package commercialised by iSOCO, a spin-off company of the laboratory of one of the authors.

Table 2: Sample measures for each category in Vα(Ψt). (Similarly for Vβ(Ψt).)

I^L_α(Ψt) = Σ_{ϕ∈Ψt} [Ct(α, β, ϕ) − Cs(α, β, ϕ)]
U^L_α(Ψt) = Σ_{ϕ∈Ψt} Σ_{ϕ′} Pt_β(ϕ′|ϕ) × Uα(ϕ′)
I^O_α(Ψt) = [Σ_{δ∈Tt} Hs(acc(β, α, δ)) − Σ_{δ∈Tt} Ht(acc(β, α, δ))] / |Tt|
U^O_α(Ψt) = Σ_{δ∈Tt} Pt(acc(β, α, δ)) × Σ_{δ′} Pt(δ′|δ) Uα(δ′)
I^G_α(Ψt) = Σ_{χ∈N} [Hs(needs(β, χ)) − Ht(needs(β, χ))] / |N|
U^G_α(Ψt) = Σ_{χ∈N} Pt(needs(β, χ)) × Et(Uα(needs(β, χ)))
I^I_α(Ψt) = Σ_{i=1}^{n} Σ_{χ∈N} [Hs(cho(β, χ, βi)) − Ht(cho(β, χ, βi))] / (n × |N|)
U^I_α(Ψt) = Σ_{i=1}^{n} Σ_{χ∈N} [Ut(cho(β, χ, βi)) − Us(cho(β, χ, βi))]
I^C_α(Ψt) = Σ_{i=1}^{n} Σ_{b∈B} [Hs(com(β, βi, b)) − Ht(com(β, βi, b))] / (n × |B|)
U^C_α(Ψt) = Σ_{i=1}^{n} Σ_{b∈B} [Ut(com(β, βi, b)) − Us(com(β, βi, b))]

Acknowledgements. Carles Sierra is partially supported by the OpenKnowledge European STREP project and by the Spanish IEA Project.

8. REFERENCES

[1] Adams, J. S. Inequity in social exchange. In Advances in Experimental Social Psychology, L. Berkowitz, Ed., vol. 2. Academic Press, New York, 1965.
[2] Arcos, J. L., Esteva, M., Noriega, P., Rodríguez, J. A., and Sierra, C.
7. DISCUSSION

In this paper we have introduced a novel approach to negotiation that uses information-theoretic and game-theoretic measures grounded on business and psychological studies. It introduces the concepts of intimacy and balance as key elements in understanding what a negotiation strategy and a negotiation tactic are. Negotiation is understood as a dialogue that affects five basic dimensions: Legitimacy, Options, Goals, Independence, and Commitment. Each dialogical move produces a change in a $2 \times 5$ matrix that evaluates the dialogue along five information-based measures and five utility-based measures. The current balance and intimacy levels and the desired, or target, levels are used by the tactics to determine what to say next. We are currently exploring the use of this model as an extension of a currently widespread eProcurement software package commercialised by iSOCO, a spin-off company of the laboratory of one of the authors.

$$I_\alpha^L(\Psi^t) = \sum_{\varphi \in \Psi^t} C^t(\alpha, \beta, \varphi) - C^s(\alpha, \beta, \varphi)$$
$$U_\alpha^L(\Psi^t) = \sum_{\varphi \in \Psi^t} \sum_{\varphi'} P_\beta^t(\varphi' \mid \varphi) \times U_\alpha(\varphi')$$
$$I_\alpha^O(\Psi^t) = \frac{\sum_{\delta \in T^t} H^s(\mathrm{acc}(\beta, \alpha, \delta)) - \sum_{\delta \in T^t} H^t(\mathrm{acc}(\beta, \alpha, \delta))}{|T^t|}$$
$$U_\alpha^O(\Psi^t) = \sum_{\delta \in T^t} P^t(\mathrm{acc}(\beta, \alpha, \delta)) \times \sum_{\delta'} P^t(\delta' \mid \delta)\, U_\alpha(\delta')$$
$$I_\alpha^G(\Psi^t) = \frac{\sum_{\chi \in N} H^s(\mathrm{needs}(\beta, \chi)) - H^t(\mathrm{needs}(\beta, \chi))}{|N|}$$
$$U_\alpha^G(\Psi^t) = \sum_{\chi \in N} P^t(\mathrm{needs}(\beta, \chi)) \times E^t(U_\alpha(\mathrm{needs}(\beta, \chi)))$$
$$I_\alpha^I(\Psi^t) = \frac{\sum_{i=1}^{o} \sum_{\chi \in N} H^s(\mathrm{cho}(\beta, \chi, \beta_i)) - H^t(\mathrm{cho}(\beta, \chi, \beta_i))}{n \times |N|}$$
$$U_\alpha^I(\Psi^t) = \sum_{i=1}^{o} \sum_{\chi \in N} U^t(\mathrm{cho}(\beta, \chi, \beta_i)) - U^s(\mathrm{cho}(\beta, \chi, \beta_i))$$
$$I_\alpha^C(\Psi^t) = \frac{\sum_{i=1}^{o} \sum_{b \in B} H^s(\mathrm{com}(\beta, \beta_i, b)) - H^t(\mathrm{com}(\beta, \beta_i, b))}{n \times |B|}$$
$$U_\alpha^C(\Psi^t) = \sum_{i=1}^{o} \sum_{b \in B} U^t(\mathrm{com}(\beta, \beta_i, b)) - U^s(\mathrm{com}(\beta, \beta_i, b))$$

Table 2: Sample measures for each category in $V_\alpha(\Psi^t)$. (Similarly for $V_\beta(\Psi^t)$.)

Acknowledgements

Carles Sierra is partially supported by the OpenKnowledge European STREP project and by the Spanish IEA Project.

8. REFERENCES

[1] Adams, J. S. Inequity in social exchange. In Advances in Experimental Social Psychology, L. Berkowitz, Ed., vol. 2. Academic Press, New York, 1965.
[2] Arcos, J. L., Esteva, M., Noriega, P., Rodríguez, J. A., and Sierra, C. Environment engineering for multiagent systems. Journal on Engineering Applications of Artificial Intelligence 18 (2005).
[3] Bazerman, M. H., Loewenstein, G. F., and White, S. B. Reversal of preference in allocation decisions: Judging an alternative versus choosing among alternatives. Administrative Science Quarterly 37 (1992), 220-240.
[4] Brandenburger, A., and Nalebuff, B. Co-Opetition: A Revolutionary Mindset That Combines Competition and Cooperation. Doubleday, New York, 1996.
[5] Cheeseman, P., and Stutz, J. On the relationship between Bayesian and maximum entropy inference. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering. American Institute of Physics, Melville, NY, USA, 2004, pp. 445-461.
[6] Debenham, J. Bargaining with information. In Proceedings of the Third International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2004) (July 2004), N. Jennings, C. Sierra, L. Sonenberg, and M. Tambe, Eds., ACM Press, New York, pp. 664-671.
[7] Fischer, R., Ury, W., and Patton, B. Getting to Yes: Negotiating Agreements Without Giving In. Penguin Books, 1995.
[8] Kalfoglou, Y., and Schorlemmer, M. IF-Map: An ontology-mapping method based on information-flow theory. In Journal on Data Semantics I, S. Spaccapietra, S. March, and K. Aberer, Eds., vol. 2800 of Lecture Notes in Computer Science. Springer-Verlag, Heidelberg, Germany, 2003, pp. 98-127.
[9] Lewicki, R. J., Saunders, D. M., and Minton, J. W. Essentials of Negotiation. McGraw-Hill, 2001.
[10] Li, Y., Bandar, Z. A., and McLean, D. An approach for measuring semantic similarity between words using multiple information sources. IEEE Transactions on Knowledge and Data Engineering 15, 4 (July/August 2003), 871-882.
[11] MacKay, D. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[12] Paris, J. Common sense and maximum entropy. Synthese 117, 1 (1999), 75-93.
[13] Sierra, C., and Debenham, J. An information-based model for trust. In Proceedings of the Fourth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2005) (Utrecht, The Netherlands, July 2005), F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. Singh, and M. Wooldridge, Eds., ACM Press, New York, pp. 497-504.
[14] Sierra, C., and Debenham, J. Trust and honour in information-based agency. In Proceedings of the Fifth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2006) (Hakodate, Japan, May 2006), P. Stone and G. Weiss, Eds., ACM Press, New York, pp. 1225-1232.
[15] Sierra, C., and Debenham, J. Information-based agency. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07) (Hyderabad, India, January 2007), pp. 1513-1518.
[16] Sondak, H., Neale, M. A., and Pinkley, R. The negotiated allocations of benefits and burdens: The impact of outcome valence, contribution, and relationship. Organizational Behaviour and Human Decision Processes 3 (December 1995), 249-260.
[17] Valley, K. L., Neale, M. A., and Mannix, E. A. Friends, lovers, colleagues, strangers: The effects of relationships on the process and outcome of negotiations. In Research in Negotiation in Organizations, R. Bies, R. Lewicki, and B. Sheppard, Eds., vol. 5. JAI Press, 1995, pp. 65-94.
H-37
Relaxed Online SVMs for Spam Filtering
Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs. Content-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam. The former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification. However, similar performance gains have yet to be demonstrated for online spam filtering. Additionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks.
[ "spam filter", "spam filter", "blog", "support vector machin", "bayesian method", "splog", "content-base filter", "link analysi", "machin learn techniqu", "link spam", "content-base spam detect", "increment updat", "logist regress", "hyperplan", "featur map" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "M", "M", "M", "U", "U", "U", "U" ]
Relaxed Online SVMs for Spam Filtering

D. Sculley, Tufts University, Department of Computer Science, 161 College Ave., Medford, MA USA, dsculley@cs.tufts.edu
Gabriel M. Wachman, Tufts University, Department of Computer Science, 161 College Ave., Medford, MA USA, gwachm01@cs.tufts.edu

ABSTRACT

Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs. Content-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam. The former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification. However, similar performance gains have yet to be demonstrated for online spam filtering. Additionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks.

Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - spam

General Terms: Measurement, Experimentation, Algorithms

1. INTRODUCTION

Electronic communication is increasingly plagued by unwanted or harmful content known as spam. The most well known form of spam is email spam, which remains a major problem for large email systems. Other forms of spam are also becoming problematic, including blog spam, in which spammers post unwanted comments in blogs [21], and splogs, which are fake blogs constructed to enable link spam with the hope of boosting the measured importance of a given webpage in the eyes of automated search engines [17]. There are a variety of methods for identifying these many forms of spam, including compiling blacklists of known spammers, and conducting link analysis. The approach of content analysis has shown particular promise and generality for combating spam. In content analysis, the actual message text (often including hyper-text and meta-text, such as HTML and headers) is analyzed using machine learning techniques for text classification to determine if the given content is spam. Content analysis has been widely applied in detecting email spam [11], and has also been used for identifying blog spam [21] and splogs [17]. In this paper, we do not explore the related problem of link spam, which is currently best combated by link analysis [13].

1.1 An Anti-Spam Controversy

The anti-spam community has been divided on the choice of the best machine learning method for content-based spam detection. Academic researchers have tended to favor the use of Support Vector Machines (SVMs), a statistically robust machine learning method [7] which yields state-of-the-art performance on general text classification [14]. However, SVMs typically require training time that is quadratic in the number of training examples, and are impractical for large-scale email systems.
Practitioners requiring content-based spam filtering have typically chosen to use the faster (if less statistically robust) machine learning method of Naive Bayes text classification [11, 12, 20]. This Bayesian method requires only linear training time, and is easily implemented in an online setting with incremental updates. This allows a deployed system to easily adapt to a changing environment over time. Other fast methods for spam filtering include compression models [1] and logistic regression [10]. It has not yet been empirically demonstrated that SVMs give improved performance over these methods in an online spam detection setting [4].

1.2 Contributions
In this paper, we address the anti-spam controversy and offer a potential resolution. We first demonstrate that online SVMs do indeed provide state-of-the-art spam detection through empirical tests on several large benchmark data sets of email spam. We then analyze the effect of the tradeoff parameter in the SVM objective function, which shows that the expensive SVM methodology may, in fact, be overkill for spam detection. We reduce the computational cost of SVM learning by relaxing this requirement on the maximum margin in online settings, and create a Relaxed Online SVM, ROSVM, appropriate for high performance content-based spam filtering in large-scale settings.

2. SPAM AND ONLINE SVMS
The controversy between academics and practitioners in spam filtering centers on the use of SVMs. The former advocate their use, but have yet to demonstrate strong performance with SVMs on online spam filtering. Indeed, the results of [4] show that, when used with default parameters, SVMs actually perform worse than other methods. In this section, we review the basic workings of SVMs and describe a simple Online SVM algorithm. We then show that Online SVMs indeed achieve state-of-the-art performance on filtering email spam, blog comment spam, and splogs, so long as the tradeoff parameter C is set to a high value. However, the cost of Online SVMs turns out to be prohibitive for large-scale applications. These findings motivate our proposal of Relaxed Online SVMs in the following section.

2.1 Background: SVMs
SVMs are a robust machine learning methodology which has been shown to yield state-of-the-art performance on text classification [14], by finding a hyperplane that separates two classes of data in data space while maximizing the margin between them. We use the following notation to describe SVMs, which draws from [23]. A data set X contains n labeled example vectors {(x_1, y_1), ..., (x_n, y_n)}, where each x_i is a vector containing features describing example i, and each y_i is the class label for that example. In spam detection, the classes spam and ham (i.e., not spam) are assigned the numerical class labels +1 and -1, respectively. The linear SVMs we employ in this paper use a hypothesis vector w and bias term b to classify a new example x, by generating a predicted class label f(x):

$$f(x) = \mathrm{sign}(\langle w, x \rangle + b)$$

SVMs find the hypothesis w, which defines the separating hyperplane, by minimizing the following objective function over all n training examples:

$$\tau(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i$$

under the constraints that

$$\forall i \in \{1, \dots, n\}: \quad y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \quad \xi_i \ge 0$$

In this objective function, each slack variable ξ_i shows the amount of error that the classifier makes on a given example x_i. Minimizing the sum of the slack variables corresponds to minimizing the loss function on the training data, while minimizing the term (1/2)||w||^2 corresponds to maximizing the margin between the two classes [23]. These two optimization goals are often in conflict; the tradeoff parameter C determines how much importance to give each of these tasks.
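To make the role of C and the slack variables concrete, the following is a minimal sketch (our illustration, not code from the paper) that evaluates the decision function and the objective τ(w, ξ) for a given hypothesis, computing each slack as the hinge loss ξ_i = max(0, 1 - y_i(⟨w, x_i⟩ + b)). All variable names are illustrative.

    import numpy as np

    def predict(w, b, x):
        # Decision rule: f(x) = sign(<w, x> + b)
        return 1 if np.dot(w, x) + b >= 0 else -1

    def svm_objective(w, b, X, y, C):
        # tau(w, xi) = 0.5*||w||^2 + C * sum_i xi_i, with each slack
        # xi_i = max(0, 1 - y_i(<w, x_i> + b)) measuring training error.
        margins = y * (X.dot(w) + b)
        slacks = np.maximum(0.0, 1.0 - margins)
        return 0.5 * np.dot(w, w) + C * slacks.sum()

A large C (such as the C = 100 used later in this paper) weights the slack term heavily, so the optimizer prefers low training error to a wide margin; a small C does the reverse.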
Linear SVMs exploit data sparsity to classify a new instance in O(s) time, where s is the number of non-zero features. This is the same classification time as other linear classifiers, and as Naive Bayesian classification. Training SVMs, however, typically takes O(n^2) time, for n training examples. A variant for linear SVMs was recently proposed which trains in O(ns) time [15], but because this method has a high constant, we do not explore it here.

Given: data set X = (x_1, y_1), ..., (x_n, y_n), C, m:
Initialize w := 0, b := 0, seenData := { }
For each x_i in X do:
    Classify x_i using f(x_i) = sign(<w, x_i> + b)
    If y_i f(x_i) < 1:
        Find w', b' using SMO on seenData, using w, b as seed hypothesis
    Add x_i to seenData
done
Figure 1: Pseudo-code for Online SVM.

2.2 Online SVMs
In many traditional machine learning applications, SVMs are applied in batch mode. That is, an SVM is trained on an entire set of training data, and is then tested on a separate set of testing data. Spam filtering is typically tested and deployed in an online setting, which proceeds incrementally. Here, the learner classifies a new example, is told if its prediction is correct, updates its hypothesis accordingly, and then awaits a new example. Online learning allows a deployed system to adapt itself in a changing environment. Re-training an SVM from scratch on the entire set of previously seen data for each new example is cost prohibitive. However, using an old hypothesis as the starting point for re-training reduces this cost considerably. One method of incremental and decremental SVM learning was proposed in [2]. Because we are only concerned with incremental learning, we apply a simpler algorithm for converting a batch SVM learner into an online SVM (see Figure 1 for pseudo-code), which is similar to the approach of [16]. Each time the Online SVM encounters an example that was poorly classified, it retrains using the old hypothesis as a starting point. Note that due to the Karush-Kuhn-Tucker (KKT) conditions, it is not necessary to re-train on well-classified examples that are outside the margins [23]. We used Platt's SMO algorithm [22] as a core SVM solver, because it is an iterative method that is well suited to converge quickly from a good initial hypothesis. Because previous work (and our own initial testing) indicates that binary feature values give the best results for spam filtering [20, 9], we optimized our implementation of the Online SMO to exploit fast inner-products with binary vectors. (Our source code is freely available at www.cs.tufts.edu/~dsculley/onlineSMO.)

2.3 Feature Mapping Spam Content
Extracting machine learning features from text may be done in a variety of ways, especially when that text may include hyper-content and meta-content such as HTML and header information. However, previous research has shown that simple methods from text classification, such as bag of words vectors, and overlapping character-level n-grams, can achieve strong results [9]. Formally, a bag of words vector is a vector x with a unique dimension for each possible word, defined as a contiguous substring of non-whitespace characters. An n-gram vector is a vector x with a unique dimension for each possible substring of n total characters. Note that n-grams may include whitespace, and are overlapping. We use binary feature scoring, which has been shown to be most effective for a variety of spam detection methods [20, 9]. We normalize the vectors with the Euclidean norm. Furthermore, with email data, we reduce the impact of long messages (for example, with attachments) by considering only the first 3,000 characters of each string. For blog comments and splogs, we consider the whole text, including any meta-data such as HTML tags, as given. No other feature selection or domain knowledge was used.
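This feature mapping is straightforward to reproduce. The sketch below (ours, not the authors' released implementation) builds a sparse, binary, Euclidean-normalized character n-gram vector, truncating messages to their first 3,000 characters as is done for the email data above.

    import math

    def binary_ngram_vector(text, n, max_chars=3000):
        # Overlapping character n-grams, which may include whitespace.
        text = text[:max_chars]  # truncation applied to email data only
        grams = {text[i:i + n] for i in range(len(text) - n + 1)}
        # Binary scoring: every present n-gram gets value 1, then the
        # vector is normalized to unit Euclidean length.
        norm = math.sqrt(len(grams)) or 1.0
        return {g: 1.0 / norm for g in grams}

Because every feature value is identical after normalization, an inner product between two such vectors reduces to counting shared n-grams, which is what makes the binary-vector optimization mentioned above effective.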
2.4 Tuning the Tradeoff Parameter, C
The SVM tradeoff parameter C must be tuned to balance the (potentially conflicting) goals of maximizing the margin and minimizing the training error. Early work on SVM based spam detection [9] showed that high values of C give best performance with binary features. Later work has not always followed this lead: a (low) default setting of C was used on splog detection [17], and also on email spam [4]. Following standard machine learning practice, we tuned C on separate tuning data not used for later testing. We used the publicly available spamassassin email spam data set, and created an online learning task by randomly interleaving all 6034 labeled messages to create a single ordered set. For tuning, we performed a coarse parameter search for C using powers of ten from .0001 to 10000. We used the Online SVM described above, and tested both binary bag of words vectors and n-gram vectors with n = {2, 3, 4}. We used the first 3000 characters of each message, which included header information, body of the email, and possibly attachments. Following the recommendation of [6], we use Area under the ROC curve as our evaluation measure. The results (see Figure 2) agree with [9]: there is a plateau of high performance achieved with all values of C ≥ 10, and performance degrades sharply with C < 1. For the remainder of our experiments with SVMs in this paper, we set C = 100. We will return to the observation that very high values of C do not degrade performance as support for the intuition that relaxed SVMs should perform well on spam.

[Figure 2: Tuning the Tradeoff Parameter C. Tests were conducted with Online SMO, using binary feature vectors, on the spamassassin data set of 6034 examples. The graph plots C (0.1 to 1000) against Area under the ROC curve, for words and 2-, 3-, and 4-gram features.]
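Area under the ROC curve, and the (1-ROCA)% transform of it reported for the email benchmarks in the next subsection, can be computed from any classifier's real-valued scores. A minimal sketch, assuming scikit-learn as the tooling (our choice; the paper itself follows the evaluation methodology of [6]):

    from sklearn.metrics import roc_auc_score

    def one_minus_roca_percent(labels, scores):
        # (1-ROCA)%: one minus the area under the ROC curve, as a
        # percent; 0 is optimal. It is the percent chance that the
        # filter scores some ham message as more spam-like than
        # some spam message.
        return (1.0 - roc_auc_score(labels, scores)) * 100.0

    # labels use +1 for spam, -1 for ham; scores are <w, x> + b
    print(one_minus_roca_percent([1, -1, 1, -1],
                                 [0.9, 0.2, 0.1, -0.3]))  # prints 25.0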
Table 1: Results for Email Spam filtering with Online SVM on benchmark data sets. Score reported is (1-ROCA)%, where 0 is optimal.

                      trec05p-1              trec06p
    OnSVM: words      0.015 (.011-.022)      0.034 (.025-.046)
           3-grams    0.011 (.009-.015)      0.025 (.017-.035)
           4-grams    0.008 (.007-.011)      0.023 (.017-.032)
    SpamProbe         0.059 (.049-.071)      0.092 (.078-.110)
    BogoFilter        0.048 (.038-.062)      0.077 (.056-.105)
    TREC Winners      0.019 (.015-.023)      0.054 (.034-.085)
    53-Ensemble       0.007 (.005-.008)      0.020 (.007-.050)

2.5 Email Spam and Online SVMs
With C tuned on a separate tuning set, we then tested the performance of Online SVMs in spam detection. We used two large benchmark data sets of email spam as our test corpora. These data sets are the 2005 TREC public data set trec05p-1 of 92,189 messages, and the 2006 TREC public data set trec06p, containing 37,822 messages in English. (We do not report our strong results on the trec06c corpus of Chinese messages as there have been questions raised over the validity of this test set.) We used the canonical ordering provided with each of these data sets for fair comparison. Results for these experiments, with bag of words vectors and n-gram vectors, appear in Table 1. To compare our results with previous scores on these data sets, we use the same (1-ROCA)% measure described in [6], which is one minus the area under the ROC curve, expressed as a percent. This measure shows the percent chance of error made by a classifier asserting that one message is more likely to be spam than another. These results show that Online SVMs do give state of the art performance on email spam. The only known system that out-performs the Online SVMs on the trec05p-1 data set is a recent ensemble classifier which combines the results of 53 unique spam filters [19]. To our knowledge, the Online SVM has out-performed every other single filter on these data sets, including those using Bayesian methods [5, 3], compression models [5, 3], logistic regression [10], and perceptron variants [3], the TREC competition winners [5, 3], and the open source email spam filters BogoFilter v1.1.5 and SpamProbe v1.4d.

2.6 Blog Comment Spam and SVMs
Blog comment spam is similar to email spam in many regards, and content-based methods have been proposed for detecting these spam comments [21]. However, large benchmark data sets of labeled blog comment spam do not yet exist. Thus, we run experiments on the only publicly available data set we know of, which was used in the content-based blog comment spam detection experiments of [21]. Because of the small size of the data set, and because prior researchers did not conduct their experiments in an on-line setting, we test the performance of linear SVMs using leave-one-out cross validation, with SVM-Light, a standard open-source SVM implementation [14]. We use the parameter setting C = 100, with the same feature space mappings as above. We report accuracy, precision, and recall to compare these to the results given on the same data set by [21]. These results (see Table 2) show that SVMs give superior performance on this data set to the prior methodology.

Table 2: Results for Blog Comment Spam Detection using SVMs and Leave One Out Cross Validation. We report the same performance measures as in the prior work for meaningful comparison.

                       accuracy   precision   recall
    SVM C = 100:
           words       0.931      0.946       0.954
           3-grams     0.951      0.963       0.965
           4-grams     0.949      0.967       0.956
    Prior best method  0.83       0.874       0.874
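For these small corpora, the leave-one-out protocol is easy to reproduce. A sketch, under the assumption that scikit-learn's LinearSVC stands in for SVM-Light (the solver actually used above):

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import LeaveOneOut

    def loo_predictions(X, y, C=100.0):
        # Train on all examples but one, predict the held-out example,
        # and repeat for every example in the data set (X, y arrays).
        preds = np.empty_like(y)
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf = LinearSVC(C=C)
            clf.fit(X[train_idx], y[train_idx])
            preds[test_idx] = clf.predict(X[test_idx])
        return preds  # compare to y for accuracy, precision, recall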
2.7 Splogs and SVMs
As with blog comment spam, there is not yet a large, publicly available benchmark corpus of labeled splog detection test data. However, the authors of [17] kindly provided us with the labeled data set of 1,389 blogs and splogs that they used to test content-based splog detection using SVMs. The only difference between our methodology and that of [17] is that they used default parameters for C, which SVM-Light sets to $C = \frac{1}{\mathrm{avg}\,\|x\|^2}$. (For normalized vectors, this default value sets C = 1.) They also tested several domain-informed feature mappings, such as giving special features to url tags. For our experiments, we used the same feature mappings as above, and tested the effect of setting C = 100. As with the methodology of [17], we performed leave one out cross validation for apples-to-apples comparison on this data. The results (see Table 3) show that a high value of C produces higher performance for the same feature space mappings, and even enables the simple 4-gram mapping to out-perform the previous best mapping, which incorporated domain knowledge by using words and urls.

Table 3: Results for Splog vs. Blog Detection using SVMs and Leave One Out Cross Validation. We report the same evaluation measures as in the prior work for meaningful comparison.

                        precision   recall   F1
    SVM C = 100:
           words        0.921       0.870    0.895
           3-grams      0.904       0.866    0.885
           4-grams      0.928       0.876    0.901
    Prior SVM with:
           words        0.887       0.864    0.875
           4-grams      0.867       0.844    0.855
           words+urls   0.893       0.869    0.881

2.8 Computational Cost
The results presented in this section demonstrate that linear SVMs give state of the art performance on content-based spam filtering. However, this performance comes at a price. Although the blog comment spam and splog data sets are too small for the quadratic training time of SVMs to appear problematic, the email data sets are large enough to illustrate the problems of quadratic training cost. Table 4 shows computation time versus data set size for each of the online learning tasks (on the same system). The training cost of SVMs is prohibitive for large-scale content-based spam detection, or for a large blog host. In the following section, we reduce this cost by relaxing the expensive requirements of SVMs.

Table 4: Execution time for Online SVMs with email spam detection, in CPU seconds. These times do not include the time spent mapping strings to feature vectors. The number of examples in each data set is given in the last row as corpus size.

                   trec06p    trec05p-1
    words          12196s     66478s
    3-grams        44605s     128924s
    4-grams        87519s     242160s
    corpus size    37,822     92,189

[Figure 3: Visualizing the effect of C. Hyperplane A maximizes the margin while accepting a small amount of training error; this corresponds to setting C to a low value. Hyperplane B accepts a smaller margin in order to reduce training error; this corresponds to setting C to a high value. Content-based spam filtering appears to do best with high values of C.]

3. RELAXED ONLINE SVMS (ROSVM)
One of the main benefits of SVMs is that they find a decision hyperplane that maximizes the margin between classes in the data space. Maximizing the margin is expensive, typically requiring quadratic training time in the number of training examples. However, as we saw in the previous section, the task of content-based spam detection is best achieved by SVMs with a high value of C. Setting C to a high value for this domain implies that minimizing training loss is more important than maximizing the margin (see Figure 3). Thus, while SVMs do create high performance spam filters, applying them in practice is overkill. The full margin maximization feature that they provide is unnecessary, and relaxing this requirement can reduce computational cost. We propose three ways to relax Online SVMs:
• Reduce the size of the optimization problem by only optimizing over the last p examples.
• Reduce the number of training updates by only training on actual errors.
• Reduce the number of iterations in the iterative SVM solver by allowing an approximate solution to the optimization problem.
As we describe in the remainder of this section, all of these methods trade statistical robustness for reduced computational cost. Experimental results reported in the following section show that they equal or approach the performance of full Online SVMs on content-based spam detection.

Given: data set X = (x_1, y_1), ..., (x_n, y_n), C, m, p:
Initialize w := 0, b := 0, seenData := { }
For each x_i in X do:
    Classify x_i using f(x_i) = sign(<w, x_i> + b)
    If y_i f(x_i) < m:
        Find w', b' with SMO on seenData, using w, b as seed hypothesis
        Set (w, b) := (w', b')
    If size(seenData) > p:
        Remove oldest example from seenData
    Add x_i to seenData
done
Figure 4: Pseudo-code for Relaxed Online SVM.

3.1 Reducing Problem Size
In the full Online SVMs, we re-optimize over the full set of seen data on every update, which becomes expensive as the number of seen data points grows. We can bound this expense by only considering the p most recent examples for optimization (see Figure 4 for pseudo-code). Note that this is not equivalent to training a new SVM classifier from scratch on the p most recent examples, because each successive optimization problem is seeded with the previous hypothesis w [8]. This hypothesis may contain values for features that do not occur anywhere in the p most recent examples, and these will not be changed. This allows the hypothesis to remember rare (but informative) features that were learned further than p examples in the past. Formally, the optimization problem is now defined most clearly in the dual form [23]. In this case, the original soft-margin SVM is computed by maximizing at example n:

$$W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle,$$

subject to the previous constraints [23]:

$$\forall i \in \{1, \dots, n\}: \quad 0 \le \alpha_i \le C \qquad \text{and} \qquad \sum_{i=1}^{n} \alpha_i y_i = 0$$

To this, we add the additional lookback buffer constraint

$$\forall j \in \{1, \dots, (n-p)\}: \quad \alpha_j = c_j$$

where c_j is a constant, fixed as the last value found for α_j while j > (n − p). Thus, the margin found by an optimization is not guaranteed to be one that maximizes the margin for the global data set of examples {x_1, ..., x_n}, but rather one that satisfies a relaxed requirement: the margin is maximized over the examples {x_(n−p+1), ..., x_n}, subject to the fixed constraints on the hyperplane that were found in previous optimizations over examples {x_1, ..., x_(n−p)}. (For completeness, when p ≥ n, define (n − p) = 1.) This set of constraints reduces the number of free variables in the optimization problem, reducing computational cost.

3.2 Reducing Number of Updates
As noted before, the KKT conditions show that a well classified example will not change the hypothesis; thus it is not necessary to re-train when we encounter such an example. Under the KKT conditions, an example x_i is considered well-classified when y_i f(x_i) > 1. If we re-train on every example that is not well-classified, our hyperplane will be guaranteed to be optimal at every step. The number of re-training updates can be reduced by relaxing the definition of well classified. An example x_i is now considered well classified when y_i f(x_i) > M, for some 0 ≤ M ≤ 1. Here, each update still produces an optimal hyperplane. The learner may encounter an example that lies within the margins, but farther from the margins than M. Such an example means the hypothesis is no longer globally optimal for the data set, but it is considered good enough for continued use without immediate retraining. This update procedure is similar to that used by variants of the Perceptron algorithm [18]. In the extreme case, we can set M = 0, which creates a mistake-driven Online SVM.
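Taken together, the buffer size p and the relaxed threshold M (the m of Figure 4) yield the ROSVM outer loop sketched below. This is our illustration, not the authors' released code: to stay self-contained it swaps the seeded SMO solver for a few warm-started hinge-loss subgradient passes over the buffer, with the pass count standing in for the iteration cap T introduced in Section 3.3.

    import numpy as np
    from collections import deque

    def rosvm_stream(stream, dim, C=100.0, M=0.8, p=10000, T=1, lr=0.01):
        # Relaxed Online SVM outer loop, after Figure 4. `stream`
        # yields (x, y) pairs with x a dense vector and y in {+1, -1}.
        w, b = np.zeros(dim), 0.0
        buf = deque(maxlen=p)                   # lookback buffer of size p
        for x, y in stream:
            pred = 1 if w.dot(x) + b >= 0 else -1  # f(x) = sign(<w,x> + b)
            if y * (w.dot(x) + b) < M:          # relaxed well-classified test
                for _ in range(T):              # capped solver passes
                    for xi, yi in buf:          # warm start: w, b kept as seed
                        if yi * (w.dot(xi) + b) < 1:   # margin violation
                            w = (1 - lr) * w + lr * C * yi * xi
                            b += lr * C * yi
                        else:
                            w = (1 - lr) * w           # regularization only
            buf.append((x, y))                  # add example after the update
            yield pred

Because the buffer never holds more than p examples, the cost of each update is bounded by p rather than by the length of the stream; this is the property behind the runtime results reported in Section 4.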
In the experimental section, we show that this mistake-driven version of Online SVMs, which updates only on actual errors, does not significantly degrade performance on content-based spam detection, but does significantly reduce cost.

3.3 Reducing Iterations
As an iterative solver, SMO makes repeated passes over the data set to optimize the objective function. SMO has one main loop, which can alternate between passing over the entire data set, or the smaller active set of current support vectors [22]. Successive iterations of this loop bring the hyperplane closer to an optimal value. However, it is possible that these iterations provide less benefit than their expense justifies. That is, a close first approximation may be good enough. We introduce a parameter T to control the maximum number of iterations we allow. As we will see in the experimental section, this parameter can be set as low as 1 with little impact on the quality of results, providing computational savings.

4. EXPERIMENTS
In Section 2, we argued that the strong performance on content-based spam detection with SVMs with a high value of C shows that the maximum margin criterion is overkill, incurring unnecessary computational cost. In Section 3, we proposed ROSVM to address this issue, as these methods trade away guarantees on the maximum margin hyperplane in return for reduced computational cost. In this section, we test these methods on the same benchmark data sets to see if state of the art performance may be achieved by these less costly methods. We find that ROSVM is capable of achieving these high levels of performance with greatly reduced cost. Our main tests on content-based spam detection are performed on large benchmark sets of email data. We then apply these methods on the smaller data sets of blog comment spam and blogs, with similar performance.

4.1 ROSVM Tests
In Section 3, we proposed three approaches for reducing the computational cost of Online SMO: reducing the problem size, reducing the number of optimization iterations, and reducing the number of training updates. Each of these approaches relaxes the maximum margin criterion on the global set of previously seen data. Here we test the effect that each of these methods has on both effectiveness and efficiency. In each of these tests, we use the large benchmark email data sets, trec05p-1 and trec06p.

4.1.1 Testing Reduced Size
For our first ROSVM test, we experiment on the effect of reducing the size of the optimization problem by only considering the p most recent examples, as described in the previous section. For this test, we use the same 4-gram mappings as for the reference experiments in Section 2, with the same value C = 100. We test a range of values p in a coarse grid search. Figure 5 reports the effect of the buffer size p in relationship to the (1-ROCA)% performance measure (top), and the number of CPU seconds required (bottom). The results show that values of p < 100 do result in degraded performance, although they evaluate very quickly. However, p values from 500 to 10,000 perform almost as well as the original Online SMO (represented here as p = 100,000), at dramatically reduced computational cost. These results are important for making state of the art performance on large-scale content-based spam detection practical with online SVMs.

[Figure 5: Reduced Size Tests. Buffer size p versus (1-ROCA)% (top) and CPU seconds (bottom), on trec05p-1 and trec06p.]
Ordinarily, the training time would grow quadratically with the number of seen examples. However, fixing a value of p ensures that the training time is independent of the size of the data set. Furthermore, a lookback buffer allows the filter to adjust to concept drift.

4.1.2 Testing Reduced Iterations
In the second ROSVM test, we experiment with reducing the number of iterations. Our initial tests showed that the maximum number of iterations used by Online SMO was rarely much larger than 10 on content-based spam detection; thus we tested values of T = {1, 2, 5, ∞}. Other parameters were identical to the original Online SVM tests. The results on this test were surprisingly stable (see Figure 6). Reducing the maximum number of SMO iterations per update had essentially no impact on classification performance, but did result in a moderate increase in speed. This suggests that any additional iterations are spent attempting to find improvements to a hyperplane that is already very close to optimal. These results show that for content-based spam detection, we can reduce computational cost by allowing only a single SMO iteration (that is, T = 1) with effectively equivalent performance.

[Figure 6: Reduced Iterations Tests. Maximum iterations T versus (1-ROCA)% (top) and CPU seconds (bottom), on trec05p-1 and trec06p.]

4.1.3 Testing Reduced Updates
For our third ROSVM experiment, we evaluate the impact of adjusting the parameter M to reduce the total number of updates. As noted before, when M = 1, the hyperplane is globally optimal at every step. Reducing M allows a slightly inconsistent hyperplane to persist until it encounters an example for which it is too inconsistent. We tested values of M from 0 to 1, at increments of 0.1. (Note that we used p = 10000 to decrease the cost of evaluating these tests.) The results for these tests appear in Figure 7, and show that there is a slight degradation in performance with reduced values of M, and that this degradation in performance is accompanied by an increase in efficiency. Values of M > 0.7 give effectively equivalent performance as M = 1, and still reduce cost.

[Figure 7: Reduced Updates Tests. Threshold M versus (1-ROCA)% (top) and CPU seconds (bottom), on trec05p-1 and trec06p.]

4.2 Online SVMs and ROSVM
We now compare ROSVM against Online SVMs on the email spam, blog comment spam, and splog detection tasks. These experiments show comparable performance on these tasks, at radically different costs. In the previous section, the effect of the different relaxation methods was tested separately. Here, we tested these methods together to create a full implementation of ROSVM. We chose the values p = 10000, T = 1, M = 0.8 for the email spam detection tasks. Note that these parameter values were selected as those allowing ROSVM to achieve comparable performance results with Online SVMs, in order to test the total difference in computational cost. The splog and blog data sets were much smaller, so we set p = 100 for these tasks to allow meaningful comparisons between the reduced size and full size optimization problems. Because these values were not hand-tuned, both generalization performance and runtime results are meaningful in these experiments.

4.2.1 Experimental Setup
We compared Online SVMs and ROSVM on email spam, blog comment spam, and splog detection.
For the email spam, we used the two large benchmark corpora, trec05p-1 and trec06p, in the standard online ordering. We randomly ordered both the blog comment spam corpus and the splog corpus to create online learning tasks. Note that this is a different setting than the leave-one-out cross validation task presented on these corpora in Section 2 - the results are not directly comparable. However, this experimental design does allow meaningful comparison between our two online methods on these content-based spam detection tasks. We ran each method on each task, and report the results in Tables 5, 6, and 7. Note that the CPU time reported for each method was generated on the same computing system. This time reflects only the time needed to complete online learning on tokenized data. We do not report the time taken to tokenize the data into binary 4-grams, as this is the same additive constant for all methods on each task. In all cases, ROSVM was significantly less expensive computationally.

Table 5: Email Spam Benchmark Data. These results compare Online SVM and ROSVM on email spam detection, using binary 4-gram feature space. Score reported is (1-ROCA)%, where 0 is optimal.

              trec05p-1               trec06p
              (1-ROCA)%    CPUs       (1-ROCA)%    CPUs
    OnSVM     0.0084       242,160    0.0232       87,519
    ROSVM     0.0090       24,720     0.0240       18,541

Table 6: Blog Comment Spam. These results compare Online SVM and ROSVM on blog comment spam detection using binary 4-gram feature space.

              Acc.    Prec.   Recall   F1      CPUs
    OnSVM     0.926   0.930   0.962    0.946   139
    ROSVM     0.923   0.925   0.965    0.945   11

Table 7: Splog Data Set. These results compare Online SVM and ROSVM on splog detection using binary 4-gram feature space.

              Acc.    Prec.   Recall   F1      CPUs
    OnSVM     0.880   0.910   0.842    0.874   29353
    ROSVM     0.878   0.902   0.849    0.875   1251

4.3 Discussion
The comparison results shown in Tables 5, 6, and 7 are striking in two ways. First, they show that the performance of Online SVMs can be matched and even exceeded by relaxed margin methods. Second, they show a dramatic disparity in computational cost. ROSVM is an order of magnitude more efficient than the normal Online SVM, and gives comparable results. Furthermore, the fixed lookback buffer ensures that the cost of each update does not depend on the size of the data set already seen, unlike Online SVMs. Note the blog and splog data sets are relatively small, and results on these data sets must be considered preliminary. Overall, these results show that there is no need to pay the high cost of SVMs to achieve this level of performance on content-based detection of spam. ROSVMs offer a far cheaper alternative with little or no performance loss.

5. CONCLUSIONS
In the past, academic researchers and industrial practitioners have disagreed on the best method for online content-based detection of spam on the web. We have presented one resolution to this debate. Online SVMs do, indeed, produce state-of-the-art performance on this task with proper adjustment of the tradeoff parameter C, but with cost that grows quadratically with the size of the data set. The high values of C required for best performance with SVMs show that the margin maximization of Online SVMs is overkill for this task. Thus, we have proposed a less expensive alternative, ROSVM, that relaxes this maximum margin requirement, and produces nearly equivalent results. These methods are efficient enough for large-scale filtering of content-based spam in its many forms. It is natural to ask why the task of content-based spam detection gets strong performance from ROSVM.
After all, not all data allows the relaxation of SVM requirements. We conjecture that email spam, blog comment spam, and splogs all share the characteristic that a subset of features are particularly indicative of content being either spam or not spam. These indicative features may be sparsely represented in the data set, because of spam methods such as word obfuscation, in which common spam words are intentionally misspelled in an attempt to reduce the effectiveness of word-based spam detection. Maximizing the margin may cause these sparsely represented features to be ignored, creating an overall reduction in performance. It appears that spam data is highly separable, allowing ROSVM to be successful with high values of C and little effort given to maximizing the margin. Future work will determine how applicable relaxed SVMs are to the general problem of text classification. Finally, we note that the success of relaxed SVM methods for content-based spam detection is a result that depends on the nature of spam data, which is potentially subject to change. Although it is currently true that ham and spam are linearly separable given an appropriate feature space, this assumption may be subject to attack. While our current methods appear robust against primitive attacks along these lines, such as the good word attack [24], we must explore the feasibility of more sophisticated attacks.

6. REFERENCES
[1] A. Bratko and B. Filipic. Spam filtering using compression models. Technical Report IJS-DP-9227, Department of Intelligent Systems, Jozef Stefan Institute, Ljubljana, Slovenia, 2005.
[2] G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In NIPS, pages 409-415, 2000.
[3] G. V. Cormack. TREC 2006 spam track overview. In The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings, 2006.
[4] G. V. Cormack and A. Bratko. Batch and on-line spam filter comparison. In Proceedings of the Third Conference on Email and Anti-Spam (CEAS), 2006.
[5] G. V. Cormack and T. R. Lynam. TREC 2005 spam track overview. In The Fourteenth Text REtrieval Conference (TREC 2005) Proceedings, 2005.
[6] G. V. Cormack and T. R. Lynam. On-line supervised spam filter evaluation. Technical report, David R. Cheriton School of Computer Science, University of Waterloo, Canada, February 2006.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[8] D. DeCoste and K. Wagstaff. Alpha seeding for support vector machines. In KDD '00: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 345-349, 2000.
[9] H. Drucker, V. Vapnik, and D. Wu. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048-1054, 1999.
[10] J. Goodman and W. Yin. Online discriminative spam filter training. In Proceedings of the Third Conference on Email and Anti-Spam (CEAS), 2006.
[11] P. Graham. A plan for spam. 2002.
[12] P. Graham. Better Bayesian filtering. 2003.
[13] Z. Gyongyi and H. Garcia-Molina. Spam: It's not just for inboxes anymore. Computer, 38(10):28-34, 2005.
[14] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In ECML '98: Proceedings of the 10th European Conference on Machine Learning, pages 137-142, 1998.
[15] T. Joachims. Training linear SVMs in linear time. In KDD '06: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 217-226, 2006.
[16] J. Kivinen, A. Smola, and R. Williamson. Online learning with kernels. In Advances in Neural Information Processing Systems 14, pages 785-793. MIT Press, 2002.
[17] P. Kolari, T. Finin, and A. Joshi. SVMs for the blogosphere: Blog identification and splog detection. AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, 2006.
[18] W. Krauth and M. Mézard. Learning algorithms with optimal stability in neural networks. Journal of Physics A, 20(11):745-752, 1987.
[19] T. Lynam, G. Cormack, and D. Cheriton. On-line spam filter fusion. In SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 123-130, 2006.
[20] V. Metsis, I. Androutsopoulos, and G. Paliouras. Spam filtering with naive Bayes - which naive Bayes? Third Conference on Email and Anti-Spam (CEAS), 2006.
[21] G. Mishne, D. Carmel, and R. Lempel. Blocking blog spam with language model disagreement. Proceedings of the 1st International Workshop on Adversarial Information Retrieval on the Web (AIRWeb), May 2005.
[22] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. In B. Scholkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998.
[23] B. Scholkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[24] G. L. Wittel and S. F. Wu. On attacking statistical spam filters. CEAS: First Conference on Email and Anti-Spam, 2004.
Relaxed Online SVMs for Spam Filtering ABSTRACT Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs. Content-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam. The former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification. However, similar performance gains have yet to be demonstrated for online spam filtering. Additionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks. 1. INTRODUCTION Electronic communication is increasingly plagued by unwanted or harmful content known as spam. The most well known form of spam is email spam, which remains a major problem for large email systems. Other forms of spam are also becoming problematic, including blog spam, in which spammers post unwanted comments in blogs [21], and splogs, which are fake blogs constructed to enable link spam with the hope of boosting the measured importance of a given webpage in the eyes of automated search engines [17]. There are a variety of methods for identifying these many forms of spam, including compiling blacklists of known spammers, and conducting link analysis. The approach of content analysis has shown particular promise and generality for combating spam. In content analysis, the actual message text (often including hyper-text and meta-text, such as HTML and headers) is analyzed using machine learning techniques for text classification to determine if the given content is spam. Content analysis has been widely applied in detecting email spam [11], and has also been used for identifying blog spam [21] and splogs [17]. In this paper, we do not explore the related problem of link spam, which is currently best combated by link analysis [13]. 1.1 An Anti-Spam Controversy The anti-spam community has been divided on the choice of the best machine learning method for content-based spam detection. Academic researchers have tended to favor the use of Support Vector Machines (SVMs), a statistically robust machine learning method [7] which yields state-of-theart performance on general text classification [14]. However, SVMs typically require training time that is quadratic in the number of training examples, and are impractical for largescale email systems. Practitioners requiring content-based spam filtering have typically chosen to use the faster (if less statistically robust) machine learning method of Naive Bayes text classification [11, 12, 20]. This Bayesian method requires only linear training time, and is easily implemented in an online setting with incremental updates. This allows a deployed system to easily adapt to a changing environment over time. Other fast methods for spam filtering include compression models [1] and logistic regression [10]. 
It has not yet been empirically demonstrated that SVMs give improved performance over these methods in an online spam detection setting [4]. 1.2 Contributions In this paper, we address the anti-spam controversy and offer a potential resolution. We first demonstrate that online SVMs do indeed provide state-of-the-art spam detection through empirical tests on several large benchmark data sets of email spam. We then analyze the effect of the tradeoff parameter in the SVM objective function, which shows that the expensive SVM methodology may, in fact, be overkill for spam detection. We reduce the computational cost of SVM learning by relaxing this requirement on the maximum margin in online settings, and create a Relaxed Online SVM, ROSVM, appropriate for high performance content-based spam filtering in large-scale settings. 2. SPAM AND ONLINE SVMS The controversy between academics and practitioners in spam filtering centers on the use of SVMs. The former advocate their use, but have yet to demonstrate strong performance with SVMs on online spam filtering. Indeed, the results of [4] show that, when used with default parameters, SVMs actually perform worse than other methods. In this section, we review the basic workings of SVMs and describe a simple Online SVM algorithm. We then show that Online SVMs indeed achieve state-of-the-art performance on filtering email spam, blog comment spam, and splogs, so long as the tradeoff parameter C is set to a high value. However, the cost of Online SVMs turns out to be prohibitive for largescale applications. These findings motivate our proposal of Relaxed Online SVMs in the following section. 2.1 Background: SVMs SVMs are a robust machine learning methodology which has been shown to yield state-of-the-art performance on text classification [14]. by finding a hyperplane that separates two classes of data in data space while maximizing the margin between them. We use the following notation to describe SVMs, which draws from [23]. A data set X contains n labeled example vectors {(x1, y1)... (xn, yn)}, where each xi is a vector containing features describing example i, and each yi is the class label for that example. In spam detection, the classes spam and ham (i.e., not spam) are assigned the numerical class labels +1 and − 1, respectively. The linear SVMs we employ in this paper use a hypothesis vector w and bias term b to classify a new example x, by generating a predicted class label f (x): SVMs find the hypothesis w, which defines the separating hyperplane, by minimizing the following objective function over all n training examples: In this objective function, each slack variable ξi shows the amount of error that the classifier makes on a given example xi. Minimizing the sum of the slack variables corresponds to minimizing the loss function on the training data, while minimizing the term 21 | | w | | 2 corresponds to maximizing the margin between the two classes [23]. These two optimization goals are often in conflict; the tradeoff parameter C determines how much importance to give each of these tasks. Linear SVMs exploit data sparsity to classify a new instance in O (s) time, where s is the number of non-zero features. This is the same classification time as other linear Figure 1: Pseudo code for Online SVM. classifiers, and as Naive Bayesian classification. Training SVMs, however, typically takes O (n2) time, for n training examples. 
A variant for linear SVMs was recently proposed which trains in O (ns) time [15], but because this method has a high constant, we do not explore it here. 2.2 Online SVMs In many traditional machine learning applications, SVMs are applied in batch mode. That is, an SVM is trained on an entire set of training data, and is then tested on a separate set of testing data. Spam filtering is typically tested and deployed in an online setting, which proceeds incrementally. Here, the learner classifies a new example, is told if its prediction is correct, updates its hypothesis accordingly, and then awaits a new example. Online learning allows a deployed system to adapt itself in a changing environment. Re-training an SVM from scratch on the entire set of previously seen data for each new example is cost prohibitive. However, using an old hypothesis as the starting point for re-training reduces this cost considerably. One method of incremental and decremental SVM learning was proposed in [2]. Because we are only concerned with incremental learning, we apply a simpler algorithm for converting a batch SVM learner into an online SVM (see Figure 1 for pseudocode), which is similar to the approach of [16]. Each time the Online SVM encounters an example that was poorly classified, it retrains using the old hypothesis as a starting point. Note that due to the Karush-Kuhn-Tucker (KKT) conditions, it is not necessary to re-train on wellclassified examples that are outside the margins [23]. We used Platt's SMO algorithm [22] as a core SVM solver, because it is an iterative method that is well suited to converge quickly from a good initial hypothesis. Because previous work (and our own initial testing) indicates that binary feature values give the best results for spam filtering [20, 9], we optimized our implementation of the Online SMO to exploit fast inner-products with binary vectors. 1 2.3 Feature Mapping Spam Content Extracting machine learning features from text may be done in a variety of ways, especially when that text may include hyper-content and meta-content such as HTML and header information. However, previous research has shown that simple methods from text classification, such as bag of words vectors, and overlapping character-level n-grams, can achieve strong results [9]. Formally, a bag of words vector is a vector x with a unique dimension for each possible Figure 2: Tuning the Tradeoff Parameter C. Tests were conducted with Online SMO, using binary feature vectors, on the spamassassin data set of 6034 examples. Graph plots C versus Area under the ROC curve. word, defined as a contiguous substring of non-whitespace characters. An n-gram vector is a vector x with a unique dimension for each possible substring of n total characters. Note that n-grams may include whitespace, and are overlapping. We use binary feature scoring, which has been shown to be most effective for a variety of spam detection methods [20, 9]. We normalize the vectors with the Euclidean norm. Furthermore, with email data, we reduce the impact of long messages (for example, with attachments) by considering only the first 3,000 characters of each string. For blog comments and splogs, we consider the whole text, including any meta-data such as HTML tags, as given. No other feature selection or domain knowledge was used. 2.4 Tuning the Tradeoff Parameter, C The SVM tradeoff parameter C must be tuned to balance the (potentially conflicting) goals of maximizing the margin and minimizing the training error. 
Early work on SVM based spam detection [9] showed that high values of C give best performance with binary features. Later work has not always followed this lead: a (low) default setting of C was used on splog detection [17], and also on email spam [4]. Following standard machine learning practice, we tuned C on separate tuning data not used for later testing. We used the publicly available spamassassin email spam data set, and created an online learning task by randomly interleaving all 6034 labeled messages to create a single ordered set. For tuning, we performed a coarse parameter search for C using powers of ten from .0001 to 10000. We used the Online SVM described above, and tested both binary bag of words vectors and n-gram vectors with n = {2, 3, 4}. We used the first 3000 characters of each message, which included header information, body of the email, and possibly attachments. Following the recommendation of [6], we use Area under the ROC curve as our evaluation measure. The results (see Figure 2) agree with [9]: there is a plateau of high performance achieved with all values of C ≥ 10, and performance degrades sharply with C <1. For the remainder of our experiments with SVMs in this paper, we set C = 100. We will return to the observation that very high values of C do not degrade performance as support for the intuition that relaxed SVMs should perform well on spam. Table 1: Results for Email Spam filtering with On Table 2: Results for Blog Comment Spam Detection using SVMs and Leave One Out Cross Validation. We report the same performance measures as in the prior work for meaningful comparison. 2.5 Email Spam and Online SVMs With C tuned on a separate tuning set, we then tested the performance of Online SVMs in spam detection. We used two large benchmark data sets of email spam as our test corpora. These data sets are the 2005 TREC public data set trec05p-1 of 92,189 messages, and the 2006 TREC public data sets, trec06p, containing 37,822 messages in English. (We do not report our strong results on the trec06c corpus of Chinese messages as there have been questions raised over the validity of this test set.) We used the canonical ordering provided with each of these data sets for fair comparison. Results for these experiments, with bag of words vectors and and n-gram vectors appear in Table 1. To compare our results with previous scores on these data sets, we use the same (1-ROCA)% measure described in [6], which is one minus the area under the ROC curve, expressed as a percent. This measure shows the percent chance of error made by a classifier asserting that one message is more likely to be spam than another. These results show that Online SVMs do give state of the art performance on email spam. The only known system that out-performs the Online SVMs on the trec05p-1 data set is a recent ensemble classifier which combines the results of 53 unique spam filters [19]. To our knowledge, the Online SVM has out-performed every other single filter on these data sets, including those using Bayesian methods [5, 3], compression models [5, 3], logistic regression [10], and perceptron variants [3], the TREC competition winners [5, 3], and open source email spam filters BogoFilter v1 .1.5 and SpamProbe v1 .4 d. 2.6 Blog Comment Spam and SVMs Blog comment spam is similar to email spam in many regards, and content-based methods have been proposed for detecting these spam comments [21]. However, large benchmark data sets of labeled blog comment spam do not yet exist. 
Thus, we run experiments on the only publicly available data set we know of, which was used in content-based blog Table 3: Results for Splog vs. Blog Detection using SVMs and Leave One Out Cross Validation. We report the same evaluation measures as in the prior work for meaningful comparison. comment spam detection experiments by [21]. Because of the small size of the data set, and because prior researchers did not conduct their experiments in an on-line setting, we test the performance of linear SVMs using leave-one-out cross validation, with SVM-Light, a standard open-source SVM implementation [14]. We use the parameter setting C = 100, with the same feature space mappings as above. We report accuracy, precision, and recall to compare these to the results given on the same data set by [21]. These results (see Table 2) show that SVMs give superior performance on this data set to the prior methodology. 2.7 Splogs and SVMs As with blog comment spam, there is not yet a large, publicly available benchmark corpus of labeled splog detection test data. However, the authors of [17] kindly provided us with the labeled data set of 1,389 blogs and splogs that they used to test content-based splog detection using SVMs. The only difference between our methodology and that of [17] is that they used default parameters for C, which SVM-Light Table 4: Execution time for Online SVMs with email spam detection, in CPU seconds. These times do not include the time spent mapping strings to feature vectors. The number of examples in each data set is given in the last row as corpus size. Figure 3: Visualizing the effect of C. Hyperplane A maximizes the margin while accepting a small amount of training error. This corresponds to setting C to a low value. Hyperplane B accepts a smaller margin in order to reduce training error. This corresponds to setting C to a high value. Content-based spam filtering appears to do best with high values of C. ear SVMs give state of the art performance on content-based spam filtering. However, this performance comes at a price. Although the blog comment spam and splog data sets are too small for the quadratic training time of SVMs to appear problematic, the email data sets are large enough to illustrate the problems of quadratic training cost. Table 4 shows computation time versus data set size for each of the online learning tasks (on same system). The training cost of SVMs are prohibitive for large-scale content based spam detection, or a large blog host. In the following section, we reduce this cost by relaxing the expensive requirements of SVMs. 3. RELAXED ONLINE SVMS (ROSVM) One of the main benefits of SVMs is that they find a decision hyperplane that maximizes the margin between classes in the data space. Maximizing the margin is expensive, typically requiring quadratic training time in the number of training examples. However, as we saw in the previous section, the task of content-based spam detection is best achieved by SVMs with a high value of C. Setting C to a high value for this domain implies that minimizing training loss is more important than maximizing the margin (see Figure 3). Thus, while SVMs do create high performance spam filters, applying them in practice is overkill. The full margin maximization feature that they provide is unnecessary, and relaxing this requirement can reduce computational cost. We propose three ways to relax Online SVMs: 9 Reduce the size of the optimization problem by only optimizing over the last P examples. 
9 Reduce the number of training updates by only training on actual errors. 9 Reduce the number of iterations in the iterative SVM avg | | X | | 2. (For normalized vectors, this default value sets C = 1.) They also tested several domain-informed feature mappings, such as giving special features to url tags. For our experiments, we used the same feature mappings as above, and tested the effect of setting C = 100. As with the methodology of [17], we performed leave one out cross validation for apples-to-apples comparison on this data. The results (see Table 3) show that a high value of C produces higher performance for the same feature space mappings, and even enables the simple 4-gram mapping to out-perform the previous best mapping which incorporated domain knowledge by using words and urls. 2.8 Computational Cost The results presented in this section demonstrate that lin Figure 4: Pseudo-code for Relaxed Online SVM. solver by allowing an approximate solution to the optimization problem. As we describe in the remainder of this subsection, all of these methods trade statistical robustness for reduced computational cost. Experimental results reported in the following section show that they equal or approach the performance of full Online SVMs on content-based spam detection. 3.1 Reducing Problem Size In the full Online SVMs, we re-optimize over the full set of seen data on every update, which becomes expensive as the number of seen data points grows. We can bound this expense by only considering the p most recent examples for optimization (see Figure 4 for pseudo-code). Note that this is not equivalent to training a new SVM classifier from scratch on the p most recent examples, because each successive optimization problem is seeded with the previous hypothesis w [8]. This hypothesis may contain values for features that do not occur anywhere in the p most recent examples, and these will not be changed. This allows the hypothesis to remember rare (but informative) features that were learned further than p examples in the past. Formally, the optimization problem is now defined most clearly in the dual form [23]. In this case, the original softmargin SVM is computed by maximizing at example n: where cj is a constant, fixed as the last value found for αj while j> (n − p). Thus, the margin found by an optimization is not guaranteed to be one that maximizes the margin for the global data set of examples {x1,..., xn)}, but rather one that satisfies a relaxed requirement that the margin be maximized over the examples {x (n − p +1),..., xn}, subject to the fixed constraints on the hyperplane that were found in previous optimizations over examples {x1,..., x (n − p)}. (For completeness, when p> n, define (n − p) = 1.) This set of constraints reduces the number of free variables in the optimization problem, reducing computational cost. 3.2 Reducing Number of Updates As noted before, the KKT conditions show that a well classified example will not change the hypothesis; thus it is not necessary to re-train when we encounter such an example. Under the KKT conditions, an example xi is considered well-classified when yif (xi)> 1. If we re-train on every example that is not well-classified, our hyperplane will be guaranteed to be optimal at every step. The number of re-training updates can be reduced by relaxing the definition of well classified. An example xi is now considered well classified when yif (xi)> M, for some 0 <M <1. Here, each update still produces an optimal hyperplane. 
The learner may encounter an example that lies within the margin, but is still farther out than M; that is, M < y_i f(x_i) < 1. Such an example means the hypothesis is no longer globally optimal for the data set, but it is considered good enough for continued use without immediate retraining. This update procedure is similar to that used by variants of the Perceptron algorithm [18]. In the extreme case, we can set M = 0, which creates a mistake-driven Online SVM. In the experimental section, we show that this version of Online SVMs, which updates only on actual errors, does not significantly degrade performance on content-based spam detection, but does significantly reduce cost.

3.3 Reducing Iterations
As an iterative solver, SMO makes repeated passes over the data set to optimize the objective function. SMO has one main loop, which can alternate between passing over the entire data set, or the smaller active set of current support vectors [22]. Successive iterations of this loop bring the hyperplane closer to an optimal value. However, it is possible that these iterations provide less benefit than their expense justifies. That is, a close first approximation may be good enough. We introduce a parameter T to control the maximum number of iterations we allow. As we will see in the experimental section, this parameter can be set as low as 1 with little impact on the quality of results, providing computational savings.
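Figure 4's pseudo-code did not survive extraction into this copy. The sketch below shows how the three relaxations combine in a single update. It is a minimal illustration under the assumptions just described, not the authors' implementation: in particular, subgradient_fit is a hypothetical stand-in for the SMO solver, included only to keep the sketch self-contained and runnable.

    import numpy as np

    def subgradient_fit(X, y, C, w0, max_passes, lr=0.01):
        # Hypothetical stand-in for the SMO solver: a few subgradient passes
        # over the L2-regularized hinge loss, warm-started from w0.
        w = w0.copy()
        for _ in range(max_passes):
            for xi, yi in zip(X, y):
                if yi * w.dot(xi) < 1.0:                  # margin violated
                    w += lr * (C * yi * xi - w / len(y))
                else:
                    w -= lr * w / len(y)                  # regularizer only
        return w

    def rosvm_update(w, buf, x, y, C=100.0, p=10000, M=0.8, T=1):
        # One Relaxed Online SVM update, combining the three relaxations:
        # (1) optimize only over the p most recent examples,
        # (2) retrain only if the example falls inside the relaxed margin M,
        # (3) cap the iterative solver at T passes.
        buf.append((x, y))
        if len(buf) > p:
            buf.pop(0)                       # fixed-size lookback buffer
        if y * w.dot(x) > M:                 # relaxed well-classified test
            return w                         # skip retraining entirely
        X = np.array([e[0] for e in buf])
        Y = np.array([e[1] for e in buf])
        return subgradient_fit(X, Y, C, w, max_passes=T)  # seeded with prior w

With p fixed, each update touches at most p examples, so the per-update cost no longer grows with the length of the stream.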
4. EXPERIMENTS
In Section 2, we argued that the strong performance on content-based spam detection with SVMs with a high value of C shows that the maximum margin criterion is overkill, incurring unnecessary computational cost. In Section 3, we proposed ROSVM to address this issue; these methods trade away guarantees on the maximum margin hyperplane in return for reduced computational cost. In this section, we test these methods on the same benchmark data sets to see if state-of-the-art performance may be achieved by these less costly methods. We find that ROSVM is capable of achieving these high levels of performance with greatly reduced cost. Our main tests on content-based spam detection are performed on large benchmark sets of email data. We then apply these methods on the smaller data sets of blog comment spam and blogs, with similar performance.

4.1 ROSVM Tests
In Section 3, we proposed three approaches for reducing the computational cost of Online SMO: reducing the problem size, reducing the number of optimization iterations, and reducing the number of training updates. Each of these approaches relaxes the maximum margin criterion on the global set of previously seen data. Here we test the effect that each of these methods has on both effectiveness and efficiency. In each of these tests, we use the large benchmark email data sets, trec05p-1 and trec06p.

4.1.1 Testing Reduced Size
For our first ROSVM test, we experiment on the effect of reducing the size of the optimization problem by only considering the p most recent examples, as described in the previous section. For this test, we use the same 4-gram mappings as for the reference experiments in Section 2, with the same value C = 100. We test a range of values p in a coarse grid search. Figure 5 reports the effect of the buffer size p in relationship to the (1-ROCA)% performance measure (top), and the number of CPU seconds required (bottom). The results show that values of p < 100 do result in degraded performance, although they evaluate very quickly. However, p values from 500 to 10,000 perform almost as well as the original Online SMO (represented here as p = 100,000), at dramatically reduced computational cost. These results are important for making state-of-the-art performance on large-scale content-based spam detection practical with online SVMs. Ordinarily, the training time would grow quadratically with the number of seen examples. However, fixing a value of p ensures that the training time is independent of the size of the data set. Furthermore, a lookback buffer allows the filter to adjust to concept drift.

Figure 5: Reduced Size Tests.

4.1.2 Testing Reduced Iterations
In the second ROSVM test, we experiment with reducing the number of iterations. Our initial tests showed that the maximum number of iterations used by Online SMO was rarely much larger than 10 on content-based spam detection; thus we tested values of T = {1, 2, 5, ∞}. Other parameters were identical to the original Online SVM tests. The results on this test were surprisingly stable (see Figure 6). Reducing the maximum number of SMO iterations per update had essentially no impact on classification performance, but did result in a moderate increase in speed. This suggests that any additional iterations are spent attempting to find improvements to a hyperplane that is already very close to optimal. These results show that for content-based spam detection, we can reduce computational cost by allowing only a single SMO iteration (that is, T = 1) with effectively equivalent performance.

Figure 6: Reduced Iterations Tests.

4.1.3 Testing Reduced Updates
For our third ROSVM experiment, we evaluate the impact of adjusting the parameter M to reduce the total number of updates. As noted before, when M = 1, the hyperplane is globally optimal at every step. Reducing M allows a slightly inconsistent hyperplane to persist until it encounters an example for which it is too inconsistent. We tested values of M from 0 to 1, at increments of 0.1. (Note that we used p = 10,000 to decrease the cost of evaluating these tests.) The results for these tests appear in Figure 7, and show that there is a slight degradation in performance with reduced values of M, and that this degradation in performance is accompanied by an increase in efficiency. Values of M > 0.7 give effectively equivalent performance as M = 1, and still reduce cost.

Figure 7: Reduced Updates Tests.
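All of these experiments use the binary 4-gram feature space introduced in Section 2. A minimal sketch of such a mapping follows; hashing the n-grams into a fixed-size space is our own simplification to bound dimensionality, not necessarily what the authors did.

    def binary_4gram_features(text, dim=2**20):
        # Map a message to the indices of its nonzero (value 1) features,
        # one per distinct character 4-gram. Note that Python's str hash is
        # salted per process; a real system would use a stable hash.
        indices = set()
        for i in range(len(text) - 3):
            indices.add(hash(text[i:i + 4]) % dim)
        return sorted(indices)

    # Two messages share a feature only if they share a 4-gram.
    spam = binary_4gram_features("buy v1agra now")
    ham = binary_4gram_features("meeting at noon")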
4.2 Online SVMs and ROSVM
We now compare ROSVM against Online SVMs on the email spam, blog comment spam, and splog detection tasks. These experiments show comparable performance on these tasks, at radically different costs. In the previous section, the effect of the different relaxation methods was tested separately. Here, we tested these methods together to create a full implementation of ROSVM. We chose the values p = 10,000, T = 1, M = 0.8 for the email spam detection tasks. Note that these parameter values were selected as those allowing ROSVM to achieve comparable performance results with Online SVMs, in order to test the total difference in computational cost. The splog and blog data sets were much smaller, so we set p = 100 for these tasks to allow meaningful comparisons between the reduced size and full size optimization problems. Because these values were not hand-tuned, both generalization performance and runtime results are meaningful in these experiments.

4.2.1 Experimental Setup
We compared Online SVMs and ROSVM on email spam, blog comment spam, and splog detection. For the email spam, we used the two large benchmark corpora, trec05p-1 and trec06p, in the standard online ordering. We randomly ordered both the blog comment spam corpus and the splog corpus to create online learning tasks. Note that this is a different setting than the leave-one-out cross validation task presented on these corpora in Section 2 - the results are not directly comparable. However, this experimental design does allow meaningful comparison between our two online methods on these content-based spam detection tasks. We ran each method on each task, and report the results in Tables 5, 6, and 7. Note that the CPU time reported for each method was generated on the same computing system. This time reflects only the time needed to complete online learning on tokenized data. We do not report the time taken to tokenize the data into binary 4-grams, as this is the same additive constant for all methods on each task. In all cases, ROSVM was significantly less expensive computationally.

Table 5: Email Spam Benchmark Data. These results compare Online SVM and ROSVM on email spam detection, using the binary 4-gram feature space. The score reported is (1-ROCA)%, where 0 is optimal.

Table 6: Blog Comment Spam. These results compare Online SVM and ROSVM on blog comment spam detection using the binary 4-gram feature space.

Table 7: Splog Data Set. These results compare Online SVM and ROSVM on splog detection using the binary 4-gram feature space.

4.3 Discussion
The comparison results shown in Tables 5, 6, and 7 are striking in two ways. First, they show that the performance of Online SVMs can be matched and even exceeded by relaxed margin methods. Second, they show a dramatic disparity in computational cost. ROSVM is an order of magnitude more efficient than the normal Online SVM, and gives comparable results. Furthermore, the fixed lookback buffer ensures that the cost of each update does not depend on the size of the data set already seen, unlike Online SVMs. Note the blog and splog data sets are relatively small, and results on these data sets must be considered preliminary. Overall, these results show that there is no need to pay the high cost of SVMs to achieve this level of performance on content-based detection of spam. ROSVMs offer a far cheaper alternative with little or no performance loss.
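The (1-ROCA)% score reported above is the area above the ROC curve, expressed as a percentage. One way to compute it, using scikit-learn (our choice of tooling, not necessarily the authors'):

    from sklearn.metrics import roc_auc_score

    def one_minus_roca_percent(labels, scores):
        # labels: 1 for spam, 0 for ham; scores: real-valued classifier outputs.
        # (1-ROCA)% is the area *above* the ROC curve as a percentage,
        # so 0.0 is a perfect ranking and 50.0 is random.
        return (1.0 - roc_auc_score(labels, scores)) * 100.0

    print(one_minus_roca_percent([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.4]))  # 0.0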
5. CONCLUSIONS
In the past, academic researchers and industrial practitioners have disagreed on the best method for online content-based detection of spam on the web. We have presented one resolution to this debate. Online SVMs do, indeed, produce state-of-the-art performance on this task with proper adjustment of the tradeoff parameter C, but with cost that grows quadratically with the size of the data set. The high values of C required for best performance with SVMs show that the margin maximization of Online SVMs is overkill for this task. Thus, we have proposed a less expensive alternative, ROSVM, that relaxes this maximum margin requirement, and produces nearly equivalent results. These methods are efficient enough for large-scale filtering of content-based spam in its many forms. It is natural to ask why the task of content-based spam detection gets strong performance from ROSVM. After all, not all data allows the relaxation of SVM requirements. We conjecture that email spam, blog comment spam, and splogs all share the characteristic that a subset of features are particularly indicative of content being either spam or not spam. These indicative features may be sparsely represented in the data set, because of spam methods such as word obfuscation, in which common spam words are intentionally misspelled in an attempt to reduce the effectiveness of word-based spam detection. Maximizing the margin may cause these sparsely represented features to be ignored, creating an overall reduction in performance. It appears that spam data is highly separable, allowing ROSVM to be successful with high values of C and little effort given to maximizing the margin. Future work will determine how applicable relaxed SVMs are to the general problem of text classification. Finally, we note that the success of relaxed SVM methods for content-based spam detection is a result that depends on the nature of spam data, which is potentially subject to change. Although it is currently true that ham and spam are linearly separable given an appropriate feature space, this assumption may be subject to attack. While our current methods appear robust against primitive attacks along these lines, such as the good word attack [24], we must explore the feasibility of more sophisticated attacks.
J-36
Playing Games in Many Possible Worlds
In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game-either full omniscient knowledge or partial but fixed information. In real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences. In this paper, we model this phenomenon. We imagine a player engaged in a question-and-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting Socratic game theory. In a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world. Players can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action. We consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made. The results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players. When the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable- and unobservable-query models. When the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservable-query Socratic games and correlated equilibria in observable-query Socratic games.
[ "game theori", "socrat game", "priori probabl distribut", "constant-sum game", "algorithm", "game-either full omnisci knowledg", "questionand-answer session", "nash equilibrium", "unobserv-queri model", "miss inform", "auction", "arbitrari partial inform", "strateg multiplay environ", "observ-queri model", "inform acquisit", "correl equilibrium" ]
[ "P", "P", "P", "P", "P", "M", "M", "M", "M", "M", "U", "M", "M", "M", "M", "M" ]
Playing Games in Many Possible Worlds
Matt Lepinski∗ , David Liben-Nowell† , Seth Gilbert∗ , and April Rasala Lehman‡ (∗ ) Computer Science and Artificial Intelligence Laboratory, MIT; Cambridge, MA 02139 († ) Department of Computer Science, Carleton College; Northfield, MN 55057 (‡ ) Google, Inc.; Mountain View, CA 94043 lepinski,sethg@theory.lcs.mit.edu, dlibenno@carleton.edu, alehman@google.com

ABSTRACT
In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game-either full omniscient knowledge or partial but fixed information. In real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences. In this paper, we model this phenomenon. We imagine a player engaged in a question-and-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting Socratic game theory. In a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world. Players can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action. We consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made. The results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players. When the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable- and unobservable-query models. When the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservable-query Socratic games and correlated equilibria in observable-query Socratic games.

Categories and Subject Descriptors
F.2 [Theory of Computation]: Analysis of algorithms and problem complexity; J.4 [Social and Behavioral Sciences]: Economics

General Terms
Algorithms, Economics, Theory

1. INTRODUCTION
Late October 1960. A smoky room. Democratic Party strategists huddle around a map. How should the Kennedy campaign allocate its remaining advertising budget? Should it focus on, say, California or New York? The Nixon campaign faces the same dilemma. Of course, neither campaign knows the effectiveness of its advertising in each state. Perhaps Californians are susceptible to Nixon's advertising, but are unresponsive to Kennedy's. In light of this uncertainty, the Kennedy campaign may conduct a survey, at some cost, to estimate the effectiveness of its advertising. Moreover, the larger-and more expensive-the survey, the more accurate it will be. Is the cost of a survey worth the information that it provides? How should one balance the cost of acquiring more information against the risk of playing a game with higher uncertainty? In this paper, we model situations of this type as Socratic games.
As in traditional game theory, the players in a Socratic game choose actions to maximize their payoffs, but we model players with incomplete information who can make costly queries to reduce their uncertainty about the state of the world before they choose their actions. This approach contrasts with traditional game theory, in which players are usually modeled as having fixed, exogenously given information about the structure of the game and its payoffs. (In traditional games of incomplete and imperfect information, there is information that the players do not have; in Socratic games, unlike in these games, the players have a chance to acquire the missing information, at some cost.) A number of related models have been explored by economists and computer scientists motivated by similar situations, often with a focus on mechanism design and auctions; a sampling of this research includes the work of Larson and Sandholm [41, 42, 43, 44], Parkes [59], Fong [22], Compte and Jehiel [12], Rezende [63], Persico and Matthews [48, 60], Crémer and Khalil [15], Rasmusen [62], and Bergemann and Välimäki [4, 5]. The model of Bergemann and Välimäki is similar in many regards to the one that we explore here; see Section 7 for some discussion. A Socratic game proceeds as follows. A real world is chosen randomly from a set of possible worlds according to a common prior distribution. Each player then selects an arbitrary query from a set of available costly queries and receives a corresponding piece of information about the real world. Finally each player selects an action and receives a payoff-a function of the players' selected actions and the identity of the real world-less the cost of the query that he or she made. Compared to traditional game theory, the distinguishing feature of our model is the introduction of explicit costs to the players for learning arbitrary partial information about which of the many possible worlds is the real world. Our research was initially inspired by recent results in psychology on decision making, but it soon became clear that Socratic game theory is also a general tool for understanding the exploitation versus exploration tradeoff, well studied in machine learning, in a strategic multiplayer environment. This tension between the risk arising from uncertainty and the cost of acquiring information is ubiquitous in economics, political science, and beyond. Our results. We consider Socratic games under two models: an unobservable-query model where players learn only the response to their own queries and an observable-query model where players also learn which queries their opponents made. We give efficient algorithms to find Nash equilibria-i.e., tuples of strategies from which no player has unilateral incentive to deviate-in broad classes of two-player Socratic games in both models. Our first result is an efficient algorithm to find Nash equilibria in unobservable-query Socratic games with constant-sum worlds, in which the sum of the players' payoffs is independent of their actions. Our techniques also yield Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. Strategically zero-sum games generalize constant-sum games by allowing the sum of the players' payoffs to depend on individual players' choices of strategy, but not on any interaction of their choices. Our second result is an efficient algorithm to find Nash equilibria in observable-query Socratic games with constant-sum worlds.
Finally, we give an efficient algorithm to find correlated equilibria-a weaker but increasingly well-studied solution concept for games [2, 3, 32, 56, 57]-in observable-query Socratic games with strategically zero-sum worlds. Like all games, Socratic games can be viewed as a special case of extensive-form games, which represent games by trees in which internal nodes represent choices made by chance or by the players, and the leaves represent outcomes that correspond to a vector of payoffs to the players. Algorithmically, the generality of extensive-form games makes them difficult to solve efficiently, and the special cases that are known to be efficiently solvable do not include even simple Socratic games. Every (complete-information) classical game is a trivial Socratic game (with a single possible world and a single trivial query), and efficiently finding Nash equilibria in classical games has been shown to be hard [10, 11, 13, 16, 17, 27, 54, 55]. Therefore we would not expect to find a straightforward polynomial-time algorithm to compute Nash equilibria in general Socratic games. However, it is well known that Nash equilibria can be found efficiently via an LP for two-player constant-sum games [49, 71] (and strategically zero-sum games [51]). A Socratic game is itself a classical game, so one might hope that these results can be applied to Socratic games with constant-sum (or strategically zero-sum) worlds. We face two major obstacles in extending these classical results to Socratic games. First, a Socratic game with constant-sum worlds is not itself a constant-sum classical game-rather, the resulting classical game is only strategically zero sum. Worse yet, a Socratic game with strategically zero-sum worlds is not itself classically strategically zero sum-indeed, there are no known efficient algorithmic techniques to compute Nash equilibria in the resulting class of classical games. (Exponential-time algorithms like Lemke/Howson, of course, can be used [45].) Thus even when it is easy to find Nash equilibria in each of the worlds of a Socratic game, we require new techniques to solve the Socratic game itself. Second, even when the Socratic game itself is strategically zero sum, the number of possible strategies available to each player is exponential in the natural representation of the game. As a result, the standard linear programs for computing equilibria have an exponential number of variables and an exponential number of constraints. For unobservable-query Socratic games with strategically zero-sum worlds, we address these obstacles by formulating a new LP that uses only polynomially many variables (though still an exponential number of constraints) and then use ellipsoid-based techniques to solve it. For observable-query Socratic games, we handle the exponentiality by decomposing the game into stages, solving the stages separately, and showing how to reassemble the solutions efficiently. To solve the stages, it is necessary to find Nash equilibria in Bayesian strategically zero-sum games, and we give an explicit polynomial-time algorithm to do so.

2. GAMES AND SOCRATIC GAMES
In this section, we review background on game theory and formally introduce Socratic games. We present these models in the context of two-player games, but the multiplayer case is a natural extension. Throughout the paper, boldface variables will be used to denote a pair of variables (e.g., a = ⟨ai, aii⟩).
Let Pr[x ← π] denote the probability that a particular value x is drawn from the distribution π, and let E_{x∼π}[g(x)] denote the expectation of g(x) when x is drawn from π.

2.1 Background on Game Theory
Consider two players, Player I and Player II, each of whom is attempting to maximize his or her utility (or payoff). A (two-player) game is a pair ⟨A, u⟩, where, for i ∈ {i, ii}:
• Ai is the set of pure strategies for Player i, and A = ⟨Ai, Aii⟩; and
• ui : A → R is the utility function for Player i, and u = ⟨ui, uii⟩.
We require that A and u be common knowledge. If each Player i chooses strategy ai ∈ Ai, then the payoffs to Players I and II are ui(a) and uii(a), respectively. A game is constant sum if, for all a ∈ A, we have that ui(a) + uii(a) = c for some fixed c independent of a. Player i can also play a mixed strategy αi ∈ Δ(Ai), where Δ(Ai) denotes the space of probability measures over the set Ai. Payoff functions are generalized as ui(α) = ui(αi, αii) := E_{a∼α}[ui(a)] = Σ_{a∈A} α(a) ui(a), where the quantity α(a) = αi(ai) · αii(aii) denotes the joint probability of the independent events that each Player i chooses action ai from the distribution αi. This generalization to mixed strategies is known as von Neumann/Morgenstern utility [70], in which players are indifferent between a guaranteed payoff x and an expected payoff of x. A Nash equilibrium is a pair α of mixed strategies so that neither player has an incentive to change his or her strategy unilaterally. Formally, the strategy pair α is a Nash equilibrium if and only if both ui(αi, αii) = max_{αi′∈Δ(Ai)} ui(αi′, αii) and uii(αi, αii) = max_{αii′∈Δ(Aii)} uii(αi, αii′); that is, the strategies αi and αii are mutual best responses. A correlated equilibrium is a distribution ψ over A that obeys the following: if a ∈ A is drawn randomly according to ψ and Player i learns ai, then no Player i has incentive to deviate unilaterally from playing ai. (A Nash equilibrium is a correlated equilibrium in which ψ(a) = αi(ai) · αii(aii) is a product distribution.) Formally, in a correlated equilibrium, for every a ∈ A we must have that ai is a best response to a randomly chosen âii ∈ Aii drawn according to ψ(ai, âii), and the analogous condition must hold for Player II.
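As discussed in the introduction, Nash equilibria of two-player constant-sum games can be found by linear programming. The following is a minimal sketch of that classical maximin LP, using scipy; it is our illustration of the standard construction, not code from the paper.

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_equilibrium(U):
        # Maximin mixed strategy for the row player of a zero-sum game with
        # row-player payoff matrix U: maximize v subject to
        # sum_i x_i * U[i, j] >= v for every column j, with x a distribution.
        m, n = U.shape
        c = np.zeros(m + 1); c[-1] = -1.0          # variables (x, v); minimize -v
        A_ub = np.hstack([-U.T, np.ones((n, 1))])  # v - sum_i x_i U[i, j] <= 0
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i x_i = 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[m]                 # equilibrium strategy, game value

    # Matching pennies: the unique equilibrium is (1/2, 1/2) with value 0.
    x, v = zero_sum_equilibrium(np.array([[1.0, -1.0], [-1.0, 1.0]]))

The column player's strategy comes from the symmetric LP (or from the dual), which is why the two-player constant-sum case is solvable in polynomial time.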
2.2 Socratic Games
In this section, we formally define Socratic games. A Socratic game is a 7-tuple ⟨A, W, u, S, Q, p, δ⟩, where, for i ∈ {i, ii}:
• Ai is, as before, the set of pure strategies for Player i.
• W is a set of possible worlds, one of which is the real world wreal.
• ui = {u^w_i : A → R | w ∈ W} is a set of payoff functions for Player i, one for each possible world.
• S is a set of signals.
• Qi is a set of available queries for Player i. When Player i makes query qi : W → S, he or she receives the signal qi(wreal). When Player i receives signal qi(wreal) in response to query qi, he or she can infer that wreal ∈ {w : qi(w) = qi(wreal)}, i.e., the set of possible worlds from which query qi cannot distinguish wreal.
• p : W → [0, 1] is a probability distribution over the possible worlds.
• δi : Qi → R≥0 gives the query cost for each available query for Player i.
Initially, the world wreal is chosen according to the probability distribution p, but the identity of wreal remains unknown to the players. That is, it is as if the players are playing the game ⟨A, u^{wreal}⟩ but do not know wreal. The players make queries q ∈ Q, and Player i receives the signal qi(wreal). We consider both observable queries and unobservable queries. When queries are observable, each player learns which query was made by the other player, and the results of his or her own query - that is, each Player i learns qi, qii, and qi(wreal). For unobservable queries, Player i learns only qi and qi(wreal). After learning the results of the queries, the players select strategies a ∈ A and receive as payoffs u^{wreal}_i(a) − δi(qi). In the Socratic game, a pure strategy for Player i consists of a query qi ∈ Qi and a response function mapping any result of the query qi to a strategy ai ∈ Ai to play. A player's state of knowledge after a query is a point in R := Q × S or Ri := Qi × S for observable or unobservable queries, respectively. Thus Player i's response function maps R or Ri to Ai. Note that the number of pure strategies is exponential, as there are exponentially many response functions. A mixed strategy involves both randomly choosing a query qi ∈ Qi and randomly choosing an action ai ∈ Ai in response to the results of the query. Formally, we will consider a mixed-strategy-function profile f = ⟨f^query, f^resp⟩ to have two parts:
• a function f^query_i : Qi → [0, 1], where f^query_i(qi) is the probability that Player i makes query qi.
• a function f^resp_i that maps R or Ri to a probability distribution over actions. Player i chooses an action ai ∈ Ai according to the probability distribution f^resp_i(q, qi(w)) for observable queries, and according to f^resp_i(qi, qi(w)) for unobservable queries. (With unobservable queries, for example, the probability that Player I plays action ai conditioned on making query qi in world w is given by Pr[ai ← f^resp_i(qi, qi(w))].)
Mixed strategies are typically defined as probability distributions over the pure strategies, but here we represent a mixed strategy by a pair ⟨f^query, f^resp⟩, which is commonly referred to as a behavioral strategy in the game-theory literature. As in any game with perfect recall, one can easily map a mixture of pure strategies to a behavioral strategy f = ⟨f^query, f^resp⟩ that induces the same probability of making a particular query qi or playing a particular action after making a query qi in a particular world. Thus it suffices to consider only this representation of mixed strategies. For a strategy-function profile f for observable queries, the (expected) payoff to Player i is given by

Σ_{q∈Q, w∈W, a∈A} [ f^query_i(qi) · f^query_ii(qii) · p(w) · Pr[ai ← f^resp_i(q, qi(w))] · Pr[aii ← f^resp_ii(q, qii(w))] · (u^w_i(a) − δi(qi)) ].

The payoffs for unobservable queries are analogous, with f^resp_j(qj, qj(w)) in place of f^resp_j(q, qj(w)).
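For concreteness, the expected-payoff expression above is a direct triple sum over query profiles, worlds, and action profiles. A sketch of that enumeration follows (ours, with dictionaries standing in for the game's tables):

    from itertools import product

    def expected_payoff_i(Q1, Q2, worlds, actions1, actions2, prior,
                          fq1, fq2, fr1, fr2, q_fn, u_i, delta_i):
        # Expected payoff to Player I of profile (fq, fr) with observable
        # queries. fq*: dict query -> prob; fr*: dict (q1, q2, signal) ->
        # {action: prob}; q_fn: dict (query, world) -> signal;
        # u_i: dict (world, a1, a2) -> utility; delta_i: dict query -> cost.
        total = 0.0
        for q1, q2, w, a1, a2 in product(Q1, Q2, worlds, actions1, actions2):
            total += (fq1[q1] * fq2[q2] * prior[w]
                      * fr1[(q1, q2, q_fn[(q1, w)])].get(a1, 0.0)
                      * fr2[(q1, q2, q_fn[(q2, w)])].get(a2, 0.0)
                      * (u_i[(w, a1, a2)] - delta_i[q1]))
        return total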
3. STRATEGICALLY ZERO-SUM GAMES
We can view a Socratic game G with constant-sum worlds as an exponentially large classical game, with pure strategies "make query qi and respond according to fi." However, this classical game is not constant sum. The sum of the players' payoffs varies depending upon their strategies, because different queries incur different costs. However, this game still has significant structure: the sum of payoffs varies only because of varying query costs. Thus the sum of payoffs does depend on players' choice of strategies, but not on the interaction of their choices - i.e., for fixed functions gi and gii, we have ui(q, f) + uii(q, f) = gi(qi, fi) + gii(qii, fii) for all strategies ⟨q, f⟩. Such games are called strategically zero sum and were introduced by Moulin and Vial [51], who describe a notion of strategic equivalence and define strategically zero-sum games as those strategically equivalent to zero-sum games. It is interesting to note that two Socratic games with the same queries and strategically equivalent worlds are not necessarily strategically equivalent.

A game ⟨A, u⟩ is strategically zero sum if there exist labels ℓ(i, ai) for every Player i and every pure strategy ai ∈ Ai such that, for all mixed-strategy profiles α, we have that the sum of the utilities satisfies

ui(α) + uii(α) = Σ_{ai∈Ai} αi(ai) · ℓ(i, ai) + Σ_{aii∈Aii} αii(aii) · ℓ(ii, aii).

Note that any constant-sum game is strategically zero sum as well. It is not immediately obvious that one can efficiently decide if a given game is strategically zero sum. For completeness, we give a characterization of classical strategically zero-sum games in terms of the rank of a simple matrix derived from the game's payoffs, allowing us to efficiently decide if a given game is strategically zero sum and, if it is, to compute the labels ℓ(i, ai).

Theorem 3.1. Consider a game G = ⟨A, u⟩ with Ai = {a^1_i, ..., a^{ni}_i}. Let M^G be the ni-by-nii matrix whose (i, j)th entry M^G_{(i,j)} satisfies log2 M^G_{(i,j)} = ui(a^i_i, a^j_ii) + uii(a^i_i, a^j_ii). Then the following are equivalent: (i) G is strategically zero sum; (ii) there exist labels ℓ(i, ai) for every player i ∈ {i, ii} and every pure strategy ai ∈ Ai such that, for all pure strategies a ∈ A, we have ui(a) + uii(a) = ℓ(i, ai) + ℓ(ii, aii); and (iii) rank(M^G) = 1.

Proof Sketch. (i ⇒ ii) is immediate; every pure strategy is a trivially mixed strategy. For (ii ⇒ iii), let ci be the ni-element column vector with jth component 2^{ℓ(i, a^j_i)}; then ci · cii^T = M^G. For (iii ⇒ i), if rank(M^G) = 1, then M^G = u · v^T. We can prove that G is strategically zero sum by choosing labels ℓ(i, a^j_i) := log2 uj and ℓ(ii, a^j_ii) := log2 vj.
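Theorem 3.1 yields an immediate decision procedure. A small numpy sketch (ours) that tests condition (iii) and, when it holds, recovers labels from a rank-one factorization:

    import numpy as np

    def strategically_zero_sum_labels(U1, U2, tol=1e-9):
        # Build M with log2 M[i, j] = U1[i, j] + U2[i, j]; entries are positive.
        M = np.exp2(U1 + U2)
        if np.linalg.matrix_rank(M, tol=tol) != 1:
            return None                      # not strategically zero sum
        # A rank-1 M factors as outer(u, v); read a factorization off the
        # first row and column, then convert back to labels via log2.
        u = M[:, 0] / np.sqrt(M[0, 0])
        v = M[0, :] / np.sqrt(M[0, 0])
        return np.log2(u), np.log2(v)        # labels l(I, a_i) and l(II, a_j)

    # A constant-sum game (payoffs summing to 0) passes the test:
    U1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
    labels = strategically_zero_sum_labels(U1, -U1)   # labels are all zero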
4. SOCRATIC GAMES WITH UNOBSERVABLE QUERIES
We begin with Socratic games with unobservable queries, where a player's choice of query is not revealed to her opponent. We give an efficient algorithm to solve unobservable-query Socratic games with strategically zero-sum worlds. Our algorithm is based upon the LP shown in Figure 1, whose feasible points are Nash equilibria for the game. The LP has polynomially many variables but exponentially many constraints. We give an efficient separation oracle for the LP, implying that the ellipsoid method [28, 38] yields an efficient algorithm. This approach extends the techniques of Koller and Megiddo [39] (see also [40]) to solve constant-sum games represented in extensive form. (Recall that their result does not directly apply in our case; even a Socratic game with constant-sum worlds is not a constant-sum classical game.)

Lemma 4.1. Let G = ⟨A, W, u, S, Q, p, δ⟩ be an arbitrary unobservable-query Socratic game with strategically zero-sum worlds. Any feasible point for the LP in Figure 1 can be efficiently mapped to a Nash equilibrium for G, and any Nash equilibrium for G can be mapped to a feasible point for the program.

Proof Sketch. We begin with a description of the correspondence between feasible points for the LP and Nash equilibria for G. First, suppose that strategy profile f = ⟨f^query, f^resp⟩ forms a Nash equilibrium for G. Then the following setting for the LP variables is feasible:

y^i_{qi} = f^query_i(qi)
x^i_{ai,qi,w} = Pr[ai ← f^resp_i(qi, qi(w))] · y^i_{qi}
ρi = Σ_{w∈W, q∈Q, a∈A} p(w) · x^i_{ai,qi,w} · x^ii_{aii,qii,w} · [u^w_i(a) − δi(qi)].

(We omit the straightforward calculations that verify feasibility.) Next, suppose ⟨x^i_{ai,qi,w}, y^i_{qi}, ρi⟩ is feasible for the LP. Let f be the strategy-function profile defined as f^query_i : qi → y^i_{qi} and f^resp_i(qi, qi(w)) : ai → x^i_{ai,qi,w} / y^i_{qi}. Verifying that this strategy profile is a Nash equilibrium requires checking that f^resp_i(qi, qi(w)) is a well-defined function (from constraint VI), that f^query_i and f^resp_i(qi, qi(w)) are probability distributions (from constraints III and IV), and that each player is playing a best response to his or her opponent's strategy (from constraints I and II). Finally, from constraints I and II, the expected payoff to Player i is at most ρi. Because the right-hand side of constraint VII is equal to the expected sum of the payoffs from f and is at most ρi + ρii, the payoffs are correct and imply the lemma.

We now give an efficient separation oracle for the LP in Figure 1, thus allowing the ellipsoid method to solve the LP in polynomial time. Recall that a separation oracle is a function that, given a setting for the variables in the LP, either returns "feasible" or returns a particular constraint of the LP that is violated by that setting of the variables. An efficient, correct separation oracle allows us to solve the LP efficiently via the ellipsoid method.

Lemma 4.2. There exists a separation oracle for the LP in Figure 1 that is correct and runs in polynomial time.

Proof. Here is a description of the separation oracle SP. On input ⟨x^i_{ai,qi,w}, y^i_{qi}, ρi⟩:
1. Check each of the constraints (III), (IV), (V), (VI), and (VII). If any one of these constraints is violated, then return it.
2. Define the strategy profile f as follows: f^query_i : qi → y^i_{qi} and f^resp_i(qi, qi(w)) : ai → x^i_{ai,qi,w} / y^i_{qi}. For each query qi, we will compute a pure best-response function ˆf^{qi}_i for Player I to strategy fii after making query qi. More specifically, given fii and the result qi(wreal) of the query qi, it is straightforward to compute the probability that, conditioned on the fact that the result of query qi is qi(w), the world is w and Player II will play action aii ∈ Aii. Therefore, for each query qi and response qi(w), Player I can compute the expected utility of each pure response ai to the induced mixed strategy over Aii for Player II. Player I can then select the ai maximizing this expected payoff. Let ˆfi be the response function such that ˆfi(qi, qi(w)) = ˆf^{qi}_i(qi(w)) for every qi ∈ Qi. Similarly, compute ˆfii.
Player i makes query qi ∈ Qi with probability yi qi and, when the actual world is w ∈ W, makes query qi and plays action ai with probability xi ai,qi,w. The expected payoff to Player i is given by ρi. 3. Let ˆρ qi i be the expected payoff to Player I using the strategy make query qi and play response function ˆfi if Player II plays according to fii. Let ˆρi = maxqi∈Qq ˆρ qi i and let ˆqi = arg maxqi∈Qq ˆρ qi i . Similarly, define ˆρ qii ii , ˆρii, and ˆqii. 4. For the ˆfi and ˆqi defined in Step 3, return constraint (I-ˆqi- ˆfi) or (II-ˆqii- ˆfii) if either is violated. If both are satisfied, then return feasible. We first note that the separation oracle runs in polynomial time and then prove its correctness. Steps 1 and 4 are clearly polynomial. For Step 2, we have described how to compute the relevant response functions by examining every action of Player I, every world, every query, and every action of Player II. There are only polynomially many queries, worlds, query results, and pure actions, so the running time of Steps 2 and 3 is thus polynomial. We now sketch the proof that the separation oracle works correctly. The main challenge is to show that if any constraint (I-qi-fi ) is violated then (I-ˆqi- ˆfi) is violated in Step 4. First, we observe that, by construction, the function ˆfi computed in Step 3 must be a best response to Player II playing fii, no matter what query Player I makes. Therefore the strategy make query ˆqi, then play response function ˆfi must be a best response to Player II playing fii, by definition of ˆqi. The right-hand side of each constraint (I-qi-fi ) is equal to the expected payoff that Player I receives when playing the pure strategy make query qi and then play response function fi against Player II``s strategy of fii. Therefore, because the pure strategy make query ˆqi and then play response function ˆfi is a best response to Player II playing fii, the right-hand side of constraint (I-ˆqi- ˆfi) is at least as large as the right hand side of any constraint (I-ˆqi-fi ). Therefore, if any constraint (I-qi-fi ) is violated, constraint (I-ˆqi- ˆfi) is also violated. An analogous argument holds for Player II. These lemmas and the well-known fact that Nash equilibria always exist [52] imply the following theorem: Theorem 4.3. Nash equilibria can be found in polynomial time for any two-player unobservable-query Socratic game with strategically zero-sum worlds. 5. SOCRATIC GAMES WITH OBSERVABLE QUERIES In this section, we give efficient algorithms to find (1) a Nash equilibrium for observable-query Socratic games with constant-sum worlds and (2) a correlated equilibrium in the broader class of Socratic games with strategically zero-sum worlds. Recall that a Socratic game G = A, W, u, S, Q, p, δ with observable queries proceeds in two stages: Stage 1: The players simultaneously choose queries q ∈ Q. Player i receives as output qi, qii, and qi(wreal). Stage 2: The players simultaneously choose strategies a ∈ A. The payoff to Player i is u wreal i (a) − δi(qi). Using backward induction, we first solve Stage 2 and then proceed to the Stage-1 game. For a query q ∈ Q, we would like to analyze the Stage-2 game ˆGq resulting from the players making queries q in Stage 1. Technically, however, ˆGq is not actually a game, because at the beginning of Stage 2 the players have different information about the world: Player I knows qi(wreal), and 154 Player II knows qii(wreal). 
5. SOCRATIC GAMES WITH OBSERVABLE QUERIES
In this section, we give efficient algorithms to find (1) a Nash equilibrium for observable-query Socratic games with constant-sum worlds and (2) a correlated equilibrium in the broader class of Socratic games with strategically zero-sum worlds. Recall that a Socratic game G = ⟨A, W, u, S, Q, p, δ⟩ with observable queries proceeds in two stages:
Stage 1: The players simultaneously choose queries q ∈ Q. Player i receives as output qi, qii, and qi(wreal).
Stage 2: The players simultaneously choose strategies a ∈ A. The payoff to Player i is u^{wreal}_i(a) − δi(qi).
Using backward induction, we first solve Stage 2 and then proceed to the Stage-1 game. For a query q ∈ Q, we would like to analyze the Stage-2 game ˆGq resulting from the players making queries q in Stage 1. Technically, however, ˆGq is not actually a game, because at the beginning of Stage 2 the players have different information about the world: Player I knows qi(wreal), and Player II knows qii(wreal). Fortunately, the situation in which players have asymmetric private knowledge has been well studied in the game-theory literature. A Bayesian game is a quadruple ⟨A, T, r, u⟩, where:
• Ai is the set of pure strategies for Player i.
• Ti is the set of types for Player i.
• r is a probability distribution over T; r(t) denotes the probability that each Player i has type ti.
• ui : A × T → R is the payoff function for Player i. If the players have types t and play pure strategies a, then ui(a, t) denotes the payoff for Player i.
Initially, a type t is drawn randomly from T according to the distribution r. Player i learns his type ti, but does not learn any other player's type. Player i then plays a mixed strategy αi ∈ Δ(Ai) - that is, a probability distribution over Ai - and receives payoff ui(α, t). A strategy function is a function hi : Ti → Δ(Ai); Player i plays the mixed strategy hi(ti) ∈ Δ(Ai) when her type is ti. A strategy-function profile h is a Bayesian Nash equilibrium if and only if no Player i has unilateral incentive to deviate from hi if the other players play according to h. For a two-player Bayesian game, if α = h(t), then the profile h is a Bayesian Nash equilibrium exactly when the following condition and its analogue for Player II hold: E_{t∼r}[ui(α, t)] = max_{hi′} E_{t∼r}[ui(⟨hi′(ti), αii⟩, t)]. These conditions hold if and only if, for all ti ∈ Ti occurring with positive probability, Player i's expected utility conditioned on his type being ti is maximized by hi(ti). A Bayesian game is constant sum if for all a ∈ A and all t ∈ T, we have ui(a, t) + uii(a, t) = ct, for some constant ct independent of a. A Bayesian game is strategically zero sum if the classical game ⟨A, u(·, t)⟩ is strategically zero sum for every t ∈ T. Whether a Bayesian game is strategically zero sum can be determined as in Theorem 3.1. (For further discussion of Bayesian games, see [25, 31].)

We now formally define the Stage-2 game as a Bayesian game. Given a Socratic game G = ⟨A, W, u, S, Q, p, δ⟩ and a query profile q ∈ Q, we define the Stage-2 Bayesian game G_stage2(q) := ⟨A, T^q, p^{stage2(q)}, u^{stage2(q)}⟩, where:
• Ai, the set of pure strategies for Player i, is the same as in the original Socratic game;
• T^q_i = {qi(w) : w ∈ W}, the set of types for Player i, is the set of signals that can result from query qi;
• p^{stage2(q)}(t) = Pr[q(w) = t | w ← p]; and
• u^{stage2(q)}_i(a, t) = Σ_{w∈W} Pr[w ← p | q(w) = t] · u^w_i(a).
We now define the Stage-1 game in terms of the payoffs for the Stage-2 games. Fix any algorithm alg that finds a Bayesian Nash equilibrium h^{q,alg} := alg(G_stage2(q)) for each Stage-2 game. Define value^alg_i(G_stage2(q)) to be the expected payoff received by Player i in the Bayesian game G_stage2(q) if each player plays according to h^{q,alg}, that is, value^alg_i(G_stage2(q)) := Σ_{w∈W} p(w) · u^{stage2(q)}_i(h^{q,alg}(q(w)), q(w)). Define the game G^alg_stage1 := ⟨A_stage1, u^{stage1(alg)}⟩, where:
• A_stage1 := Q, the set of available queries in the Socratic game; and
• u^{stage1(alg)}_i(q) := value^alg_i(G_stage2(q)) − δi(qi).
I.e., players choose queries q and receive payoffs corresponding to value^alg(G_stage2(q)), less query costs.
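The Stage-2 payoff u^{stage2(q)}_i(a, t) defined above is a posterior-weighted average of world payoffs. A compact sketch of that construction (ours, with illustrative dictionary types):

    def stage2_payoff(worlds, prior, q, u, a, t):
        # u^{stage2(q)}_i(a, t) = sum_w Pr[w | q(w) = t] * u^w_i(a).
        # q: dict world -> joint signal tuple; u: dict world -> {action
        # profile a: payoff to the player in question}.
        consistent = [w for w in worlds if q[w] == t]
        z = sum(prior[w] for w in consistent)     # p^{stage2(q)}(t)
        return sum(prior[w] / z * u[w][a] for w in consistent)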
We now define the Stage-1 game in terms of the payoffs for the Stage-2 games. Fix any algorithm alg that finds a Bayesian Nash equilibrium h^{q,alg} := alg(G_stage2(q)) for each Stage-2 game. Define value^alg_i(G_stage2(q)) to be the expected payoff received by Player i in the Bayesian game G_stage2(q) if each player plays according to h^{q,alg}, that is, value^alg_i(G_stage2(q)) := Σ_{w∈W} p(w) · u^stage2(q)_i(h^{q,alg}(q(w)), q(w)). Define the game G^alg_stage1 := ⟨A^stage1, u^stage1(alg)⟩, where:

• A^stage1 := Q, the set of available queries in the Socratic game; and
• u^stage1(alg)_i(q) := value^alg_i(G_stage2(q)) − δ_i(q_i).

I.e., players choose queries q and receive payoffs corresponding to value^alg(G_stage2(q)), less query costs.

Lemma 5.1. Consider an observable-query Socratic game G = ⟨A, W, ũ, S, Q, p, δ⟩. Let G_stage2(q) be the Stage-2 games for all q ∈ Q, let alg be an algorithm finding a Bayesian Nash equilibrium in each G_stage2(q), and let G^alg_stage1 be the Stage-1 game. Let α be a Nash equilibrium for G^alg_stage1, and let h^{q,alg} := alg(G_stage2(q)) be a Bayesian Nash equilibrium for each G_stage2(q). Then the following strategy profile is a Nash equilibrium for G:

• In Stage 1, Player i makes query q_i with probability α_i(q_i). (That is, set f^query(q) := α(q).)
• In Stage 2, if q is the query profile from Stage 1 and q_i(w_real) denotes the response to Player i's query, then Player i chooses action a_i with probability h^{q,alg}_i(q_i(w_real)). (In other words, set f^resp_i(q, q_i(w)) := h^{q,alg}_i(q_i(w)).)

We now find equilibria in the stage games for Socratic games with constant- or strategically zero-sum worlds. We first show that the stage games are well structured in this setting:

Lemma 5.2. Consider an observable-query Socratic game G = ⟨A, W, ũ, S, Q, p, δ⟩ with constant-sum worlds. Then the Stage-1 game G^alg_stage1 is strategically zero sum for every algorithm alg, and every Stage-2 game G_stage2(q) is Bayesian constant sum. If the worlds of G are strategically zero sum, then every G_stage2(q) is Bayesian strategically zero sum.

We now show that we can efficiently compute equilibria for these well-structured stage games.

Theorem 5.3. There exists a polynomial-time algorithm BNE finding Bayesian Nash equilibria in strategically zero-sum Bayesian (and thus classical strategically zero-sum or Bayesian constant-sum) two-player games.

Proof Sketch. Let G = ⟨A, T, r, u⟩ be a strategically zero-sum Bayesian game. Define an unobservable-query Socratic game G∗ with one possible world for each t ∈ T, one available zero-cost query q_i for each Player i so that q_i reveals t_i, and all else as in G. Bayesian Nash equilibria in G correspond directly to Nash equilibria in G∗, and the worlds of G∗ are strategically zero sum. Thus by Theorem 4.3 we can compute Nash equilibria for G∗, and thus we can compute Bayesian Nash equilibria for G. (LPs for zero-sum two-player Bayesian games have been previously developed and studied [61].)

Theorem 5.4. We can compute a Nash equilibrium for an arbitrary two-player observable-query Socratic game G = ⟨A, W, ũ, S, Q, p, δ⟩ with constant-sum worlds in polynomial time.

Proof. Because each world of G is constant sum, Lemma 5.2 implies that the induced Stage-2 games G_stage2(q) are all Bayesian constant sum. Thus we can use algorithm BNE to compute a Bayesian Nash equilibrium h^{q,BNE} := BNE(G_stage2(q)) for each q ∈ Q, by Theorem 5.3. Furthermore, again by Lemma 5.2, the induced Stage-1 game G^BNE_stage1 is classical strategically zero sum. Therefore we can again use algorithm BNE to compute a Nash equilibrium α := BNE(G^BNE_stage1), again by Theorem 5.3. Therefore, by Lemma 5.1, we can assemble α and the h^{q,BNE}'s into a Nash equilibrium for the Socratic game G.
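The proof above translates directly into a two-pass pipeline, sketched below. Here bne stands in for the algorithm BNE of Theorem 5.3, stage2_game is the construction sketched earlier, and classical_as_bayesian is an assumed wrapper that views a classical game as a Bayesian game with a single trivial type per player; the data layout (game.queries, game.delta, and so on) is illustrative, not the paper's.

def solve_observable_constant_sum(game, bne):
    # Stage 2: a Bayesian Nash equilibrium h[q] for every joint query profile
    # q = (q1, q2); bne is assumed to return (strategy functions, payoffs).
    h, value = {}, {}
    for q in game.queries:
        g2 = stage2_game(game.worlds, game.p, q[0], q[1], game.u)
        h[q], value[q] = bne(g2)
    # Stage 1: a classical game over Q with payoffs value[q][i] - delta_i(q_i);
    # by Lemma 5.2 it is strategically zero sum, so bne also solves it (with
    # one trivial type per player, via the assumed wrapper).
    u1 = {q: tuple(value[q][i] - game.delta[i][q[i]] for i in (0, 1))
          for q in game.queries}
    alpha, _ = bne(classical_as_bayesian(u1))   # mixed strategies over queries
    # Lemma 5.1: in Stage 1 sample q ~ alpha; in Stage 2, after observing the
    # signal s = q_i(w_real), Player i plays the mixed strategy h[q][i](s).
    return alpha, h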
We would like to extend our results on observable-query Socratic games to Socratic games with strategically zero-sum worlds. While we can still find Nash equilibria in the Stage-2 games, the resulting Stage-1 game is not in general strategically zero sum. Thus, finding Nash equilibria in observable-query Socratic games with strategically zero-sum worlds seems to require substantially new techniques. However, our techniques for decomposing observable-query Socratic games do allow us to find correlated equilibria in this case.

Lemma 5.5. Consider an observable-query Socratic game G = ⟨A, W, ũ, S, Q, p, δ⟩. Let alg be an arbitrary algorithm that finds a Bayesian Nash equilibrium in each of the derived Stage-2 games G_stage2(q), and let G^alg_stage1 be the derived Stage-1 game. Let φ be a correlated equilibrium for G^alg_stage1, and let h^{q,alg} := alg(G_stage2(q)) be a Bayesian Nash equilibrium for each G_stage2(q). Then the following distribution over pure strategies is a correlated equilibrium for G:

ψ(q, f) := φ(q) · Π_{i∈{I,II}} Π_{s∈S} Pr[f_i(q, s) ← h^{q,alg}_i(s)].

Thus to find a correlated equilibrium in an observable-query Socratic game with strategically zero-sum worlds, we need only algorithm BNE from Theorem 5.3 along with an efficient algorithm for finding a correlated equilibrium in a general game. Such an algorithm exists (the definition of correlated equilibria can be directly translated into an LP [3]), and therefore we have the following theorem:

Theorem 5.6. We can provide both efficient oracle access and efficient sampling access to a correlated equilibrium for any observable-query two-player Socratic game with strategically zero-sum worlds.

Because the support of the correlated equilibrium may be exponentially large, providing oracle and sampling access is the natural way to represent the correlated equilibrium. By Lemma 5.5, we can also compute correlated equilibria in any observable-query Socratic game for which Nash equilibria are computable in the induced G_stage2(q) games (e.g., when G_stage2(q) is of constant size).
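One plausible way to provide the sampling access of Theorem 5.6 is to draw a query profile from the Stage-1 correlated equilibrium φ and then draw each player's response function from the Stage-2 equilibria, mirroring the product form of ψ in Lemma 5.5. The sketch below assumes φ is a finite distribution over query profiles and h[q][i] maps a signal to a mixed strategy given as a dict; sample_from is our own helper, not from the paper.

import random

def sample_from(dist):
    # Draw an outcome from a finite distribution given as {outcome: probability}.
    r, acc, last = random.random(), 0.0, None
    for outcome, pr in dist.items():
        last, acc = outcome, acc + pr
        if r < acc:
            return outcome
    return last                         # guard against floating-point slack

def sample_correlated_eq(phi, h, signals):
    # Draw q from the Stage-1 correlated equilibrium phi, then build each
    # player's response function by sampling, for every possible signal s, an
    # action from the Stage-2 equilibrium strategy h[q][i](s). This realizes
    # the product form of psi(q, f) in Lemma 5.5.
    q = sample_from(phi)
    f = [{s: sample_from(h[q][i](s)) for s in signals} for i in (0, 1)]
    return q, f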
Another potentially interesting model of queries in Socratic games is what one might call public queries, in which both the choice and the outcome of a player's query are observable by all players in the game. (This model might be most appropriate in the presence of corporate espionage or media leaks, or in a setting in which the queries, and thus their results, are made in plain view.) The techniques that we have developed in this section yield exactly the same results as for observable queries. The proof is actually simpler: with public queries, the players' payoffs are common knowledge when Stage 2 begins, and thus Stage 2 really is a complete-information game. (There may still be uncertainty about the real world, but all players use the observed signals to infer exactly the same set of possible worlds in which w_real may lie; thus they are playing a complete-information game against each other.) Thus we obtain the same results as in Theorems 5.4 and 5.6 more simply, by solving Stage 2 using a (non-Bayesian) Nash-equilibrium finder and solving Stage 1 as before.

Our results for observable queries are weaker than for unobservable ones: in Socratic games with worlds that are strategically zero sum but not constant sum, we find only a correlated equilibrium in the observable case, whereas we find a Nash equilibrium in the unobservable case. We might hope to extend our unobservable-query techniques to observable queries, but there is no obvious way to do so. The fundamental obstacle is that the LP's payoff constraint becomes nonlinear if there is any dependence on the probability that the other player made a particular query. This dependence arises with observable queries, suggesting that observable Socratic games with strategically zero-sum worlds may be harder to solve.

6. RELATED WORK

Our work was initially motivated by research in the social sciences indicating that real people seem (irrationally) paralyzed when they are presented with additional options. In this section, we briefly review some of these social-science experiments and then discuss technical approaches related to Socratic game theory.

Prima facie, a rational agent's happiness given an added option can only increase. However, recent research has found that more choices tend to decrease happiness: for example, students choosing among extra-credit options are more likely to do extra credit if given a small subset of the choices and, moreover, produce higher-quality work [35]. (See also [19].) The psychology literature explores a number of explanations: people may miscalculate their opportunity cost by comparing their choice to a "component-wise maximum" of all other options instead of the single best alternative [65], a new option may draw undue attention to aspects of the other options [67], and so on. The present work explores an economic explanation of this phenomenon: information is not free. When there are more options, a decision-maker must spend more time to achieve a satisfactory outcome. See, e.g., the work of Skyrms [68] for a philosophical perspective on the role of deliberation in strategic situations. Finally, we note the connection between Socratic games and modal logic [34], a formalism for the logic of possibility and necessity.

The observation that human players typically do not play "rational" strategies has inspired attempts to model partially rational players. The typical model of this so-called bounded rationality [36, 64, 66] is to postulate bounds on computational power in computing the consequences of a strategy. The work on bounded rationality [23, 24, 53, 58] differs from the models that we consider here: instead of putting hard limitations on the computational power of the agents, we restrict their a priori knowledge of the state of the world, requiring them to spend time (and therefore money/utility) to learn about it. Partially observable stochastic games (POSGs) are a general framework used in AI to model situations of multi-agent planning in an evolving, unknown environment, but the generality of POSGs seems to make them very difficult [6]. Recent work has developed algorithms for restricted classes of POSGs, most notably classes of cooperative POSGs (e.g., [20, 30]), which are very different from the competitive strategically zero-sum games we address in this paper.

The fundamental question in Socratic games is deciding on the comparative value of making a more costly but more informative query, or concluding the data-gathering phase and picking the best option, given current information. This tradeoff has been explored in a variety of other contexts; a sampling of these contexts includes aggregating results from delay-prone information sources [8], doing approximate reasoning in intelligent systems [72], and deciding when to take the current best guess of disease diagnosis from a belief-propagation network and when to let it continue inference [33], among many others. This issue can also be viewed as another perspective on the general question of exploration versus exploitation that arises often in AI: when is it better to actively seek additional information instead of exploiting the knowledge one already has? (See, e.g., [69].) Most of this work differs significantly from our own in that it considers single-agent planning as opposed to the game-theoretic setting. A notable exception is the work of Larson and Sandholm [41, 42, 43, 44] on mechanism design for interacting agents whose computation is costly and limited. They present a model in which players must solve a computationally intractable valuation problem, using costly computation to learn some hidden parameters, and give results for auctions and bargaining games in this model.
7. FUTURE DIRECTIONS

Efficiently finding Nash equilibria in Socratic games with non-strategically zero-sum worlds is probably difficult, because the existence of such an algorithm for classical games has been shown to be unlikely [10, 11, 13, 16, 17, 27, 54, 55]. There has, however, been some algorithmic success in finding Nash equilibria in restricted classical settings (e.g., [21, 46, 47, 57]); we might hope to extend our results to analogous Socratic games.

An efficient algorithm to find correlated equilibria in general Socratic games seems more attainable. Suppose the players receive recommended queries and responses. The difficulty is that when a player considers a deviation from his recommended query, he already knows his recommended response in each of the Stage-2 games. In a correlated equilibrium, a player's expected payoff generally depends on his recommended strategy, and thus a player may deviate in Stage 1 so as to land in a Stage-2 game where he has been given a better than average recommended response. (Socratic games are succinct games of superpolynomial type, so Papadimitriou's results [56] do not imply correlated equilibria for them.)

Socratic games can be extended to allow players to make adaptive queries, choosing subsequent queries based on previous results. Our techniques carry over to O(1) rounds of unobservable queries, but it would be interesting to compute equilibria in Socratic games with adaptive observable queries or with ω(1) rounds of unobservable queries. Special cases of adaptive Socratic games are closely related to single-agent problems like minimum latency [1, 7, 26], determining strategies for using priced information [9, 29, 37], and an online version of minimum test cover [18, 50]. Although there are important technical distinctions between adaptive Socratic games and these problems, approximation techniques from this literature may apply to Socratic games. The question of approximation raises interesting questions even in non-adaptive Socratic games. An ε-approximate Nash equilibrium is a strategy profile α such that no player can increase her payoff by an additive ε by deviating from α. Finding approximate Nash equilibria in both adaptive and non-adaptive Socratic games is an interesting direction to pursue.

Another natural extension is the model where query results are stochastic. In this paper, we model a query as deterministically partitioning the possible worlds into subsets that the query cannot distinguish. However, one could instead model a query as probabilistically mapping the set of possible worlds into the set of signals. With this modification, our unobservable-query model becomes equivalent to the model of Bergemann and Välimäki [4, 5], in which the result of a query is a posterior distribution over the worlds. Our techniques allow us to compute equilibria in such a stochastic-query model provided that each query is represented as a table that, for each world/signal pair, lists the probability that the query outputs that signal in that world. It is also interesting to consider settings in which the game's queries are specified by a compact representation of the relevant probability distributions. (For example, one might consider a setting in which the algorithm has only a sampling oracle for the posterior distributions envisioned by Bergemann and Välimäki.) Efficiently finding equilibria in such settings remains an open problem.
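For concreteness, the stochastic-query table just described might look as follows; the worlds, signals, and probabilities are made up for illustration.

# p: prior over worlds; stochastic_query[w][s] = Pr[query emits signal s in w].
p = {"w1": 0.5, "w2": 0.5}
stochastic_query = {
    "w1": {"hi": 0.9, "lo": 0.1},
    "w2": {"hi": 0.2, "lo": 0.8},
}

def posterior(p, query, signal):
    # Posterior over worlds after observing `signal`: proportional to
    # p(w) * Pr[signal | w], matching the Bergemann-Valimaki view of a query
    # as returning a posterior distribution over the worlds.
    weights = {w: p[w] * query[w].get(signal, 0.0) for w in p}
    z = sum(weights.values())
    return {w: v / z for w, v in weights.items()}

# Example: posterior(p, stochastic_query, "hi") gives {"w1": 9/11, "w2": 2/11}.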
Another interesting setting for Socratic games is when the set Q of available queries is given by Q = P(Γ), i.e., each player chooses to make a set q ∈ P(Γ) of queries from a specified groundset Γ of queries. Here we take the query cost to be a linear function, so that δ(q) = Σ_{γ∈q} δ({γ}). Natural groundsets include comparison queries ("if my opponent is playing strategy a_II, would I prefer to play a_I or â_I?"), strategy queries ("what is my vector of payoffs if I play strategy a_I?"), and world-identity queries ("is the world w ∈ W the real world?"). When one can infer a polynomial bound on the number of queries made by a rational player, then our results yield efficient solutions. (For example, we can efficiently solve games in which every groundset element γ ∈ Γ has δ({γ}) = Ω(M_max − M_min), where M_max and M_min denote the maximum and minimum payoffs to any player in any world.) Conversely, it is NP-hard to compute a Nash equilibrium for such a game when every δ({γ}) ≤ 1/|W|², even when the worlds are constant sum and Player II has only a single available strategy. Thus even computing a best response for Player I is hard. (This proof proceeds by reduction from set cover; intuitively, for sufficiently low query costs, Player I must fully identify the actual world through his queries. Selecting a minimum-sized set of these queries is hard.) Computing Player I's best response can be viewed as maximizing a submodular function, and thus a best response can be (1 − 1/e) ≈ 0.63-approximated greedily [14]. An interesting open question is whether this approximate best-response calculation can be leveraged to find an approximate Nash equilibrium.
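The greedy idea mentioned above might be sketched as follows; value_of is an assumed oracle for Player I's expected payoff as a (submodular) function of the chosen query set, and the loop merely illustrates the marginal-gain rule rather than restating the exact conditions under which the (1 − 1/e) bound of [14] applies.

def greedy_query_set(groundset, delta, value_of):
    # Repeatedly add the single query gamma whose net marginal gain
    # value_of(S + gamma) - value_of(S) - delta[gamma] is largest, stopping
    # when no addition helps.
    chosen = set()
    while True:
        remaining = groundset - chosen
        if not remaining:
            return chosen
        gains = {g: value_of(chosen | {g}) - value_of(chosen) - delta[g]
                 for g in remaining}
        g, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:
            return chosen
        chosen.add(g)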
8. ACKNOWLEDGEMENTS

Part of this work was done while all authors were at MIT CSAIL. We thank Erik Demaine, Natalia Hernandez Gardiol, Claire Monteleoni, Jason Rennie, Madhu Sudan, and Katherine White for helpful comments and discussions.

9. REFERENCES

[1] Aaron Archer and David P. Williamson. Faster approximation algorithms for the minimum latency problem. In Proceedings of the Symposium on Discrete Algorithms, pages 88-96, 2003.
[2] R. J. Aumann. Subjectivity and correlation in randomized strategies. J. Mathematical Economics, 1:67-96, 1974.
[3] Robert J. Aumann. Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55(1):1-18, January 1987.
[4] Dick Bergemann and Juuso Välimäki. Information acquisition and efficient mechanism design. Econometrica, 70(3):1007-1033, May 2002.
[5] Dick Bergemann and Juuso Välimäki. Information in mechanism design. Technical Report 1532, Cowles Foundation for Research in Economics, 2005.
[6] Daniel S. Bernstein, Shlomo Zilberstein, and Neil Immerman. The complexity of decentralized control of Markov Decision Processes. Mathematics of Operations Research, pages 819-840, 2002.
[7] Avrim Blum, Prasad Chalasani, Don Coppersmith, Bill Pulleyblank, Prabhakar Raghavan, and Madhu Sudan. The minimum latency problem. In Proceedings of the Symposium on the Theory of Computing, pages 163-171, 1994.
[8] Andrei Z. Broder and Michael Mitzenmacher. Optimal plans for aggregation. In Proceedings of the Principles of Distributed Computing, pages 144-152, 2002.
[9] Moses Charikar, Ronald Fagin, Venkatesan Guruswami, Jon Kleinberg, Prabhakar Raghavan, and Amit Sahai. Query strategies for priced information. J. Computer and System Sciences, 64(4):785-819, June 2002.
[10] Xi Chen and Xiaotie Deng. 3-NASH is PPAD-complete. In Electronic Colloquium on Computational Complexity, 2005.
[11] Xi Chen and Xiaotie Deng. Settling the complexity of 2-player Nash-equilibrium. In Electronic Colloquium on Computational Complexity, 2005.
[12] Olivier Compte and Philippe Jehiel. Auctions and information acquisition: Sealed-bid or dynamic formats? Technical report, Centre d'Enseignement et de Recherche en Analyse Socio-économique, 2002.
[13] Vincent Conitzer and Tuomas Sandholm. Complexity results about Nash equilibria. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 765-771, 2003.
[14] Gerard Cornuejols, Marshall L. Fisher, and George L. Nemhauser. Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science, 23(8), April 1977.
[15] Jacques Crémer and Fahad Khalil. Gathering information before signing a contract. American Economic Review, 82:566-578, 1992.
[16] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of computing a Nash equilibrium. In Electronic Colloquium on Computational Complexity, 2005.
[17] Konstantinos Daskalakis and Christos H. Papadimitriou. Three-player games are hard. In Electronic Colloquium on Computational Complexity, 2005.
[18] K. M. J. De Bontridder, B. V. Halldórsson, M. M. Halldórsson, C. A. J. Hurkens, J. K. Lenstra, R. Ravi, and L. Stougie. Approximation algorithms for the test cover problem. Mathematical Programming, 98(1-3):477-491, September 2003.
[19] Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren, and Rick B. van Baaren. On making the right choice: The deliberation-without-attention effect. Science, 311:1005-1007, 17 February 2006.
[20] Rosemary Emery-Montemerlo, Geoff Gordon, Jeff Schneider, and Sebastian Thrun. Approximate solutions for partially observable stochastic games with common payoffs. In Autonomous Agents and Multi-Agent Systems, 2004.
[21] Alex Fabrikant, Christos Papadimitriou, and Kunal Talwar. The complexity of pure Nash equilibria. In Proceedings of the Symposium on the Theory of Computing, 2004.
[22] Kyna Fong. Multi-stage Information Acquisition in Auction Design. Senior thesis, Harvard College, 2003.
[23] Lance Fortnow and Duke Whang. Optimality and domination in repeated games with bounded players. In Proceedings of the Symposium on the Theory of Computing, pages 741-749, 1994.
[24] Yoav Freund, Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, and Robert E. Schapire. Efficient algorithms for learning to play repeated games against computationally bounded adversaries. In Proceedings of the Foundations of Computer Science, pages 332-341, 1995.
[25] Drew Fudenberg and Jean Tirole. Game Theory. MIT, 1991.
[26] Michel X. Goemans and Jon Kleinberg. An improved approximation ratio for the minimum latency problem. Mathematical Programming, 82:111-124, 1998.
[27] Paul W. Goldberg and Christos H. Papadimitriou. Reducibility among equilibrium problems. In Electronic Colloquium on Computational Complexity, 2005.
[28] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1:70-89, 1981.
[29] Anupam Gupta and Amit Kumar. Sorting and selection with structured costs. In Proceedings of the Foundations of Computer Science, pages 416-425, 2001.
[30] Eric A. Hansen, Daniel S. Bernstein, and Shlomo Zilberstein. Dynamic programming for partially observable stochastic games. In National Conference on Artificial Intelligence (AAAI), 2004.
[31] John C. Harsanyi. Games with incomplete information played by Bayesian players. Management Science, 14(3,5,7), 1967-1968.
[32] Sergiu Hart and David Schmeidler. Existence of correlated equilibria. Mathematics of Operations Research, 14(1):18-25, 1989.
[33] Eric Horvitz and Geoffrey Rutledge. Time-dependent utility and action under uncertainty. In Uncertainty in Artificial Intelligence, pages 151-158, 1991.
[34] G. E. Hughes and M. J. Cresswell. A New Introduction to Modal Logic. Routledge, 1996.
[35] Sheena S. Iyengar and Mark R. Lepper. When choice is demotivating: Can one desire too much of a good thing? J. Personality and Social Psychology, 79(6):995-1006, 2000.
[36] Ehud Kalai. Bounded rationality and strategic complexity in repeated games. Game Theory and Applications, pages 131-157, 1990.
[37] Sampath Kannan and Sanjeev Khanna. Selection with monotone comparison costs. In Proceedings of the Symposium on Discrete Algorithms, pages 10-17, 2003.
[38] L. G. Khachiyan. A polynomial algorithm in linear programming. Doklady Akademii Nauk SSSR, 244, 1979.
[39] Daphne Koller and Nimrod Megiddo. The complexity of two-person zero-sum games in extensive form. Games and Economic Behavior, 4:528-552, 1992.
[40] Daphne Koller, Nimrod Megiddo, and Bernhard von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14:247-259, 1996.
[41] Kate Larson. Mechanism Design for Computationally Limited Agents. PhD thesis, CMU, 2004.
[42] Kate Larson and Tuomas Sandholm. Bargaining with limited computation: Deliberation equilibrium. Artificial Intelligence, 132(2):183-217, 2001.
[43] Kate Larson and Tuomas Sandholm. Costly valuation computation in auctions. In Proceedings of the Theoretical Aspects of Rationality and Knowledge, July 2001.
[44] Kate Larson and Tuomas Sandholm. Strategic deliberation and truthful revelation: An impossibility result. In Proceedings of the ACM Conference on Electronic Commerce, May 2004.
[45] C. E. Lemke and J. T. Howson, Jr. Equilibrium points of bimatrix games. J. Society for Industrial and Applied Mathematics, 12, 1964.
[46] Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Playing large games using simple strategies. In Proceedings of the ACM Conference on Electronic Commerce, pages 36-41, 2003.
[47] Michael L. Littman, Michael Kearns, and Satinder Singh. An efficient exact algorithm for singly connected graphical games. In Proceedings of Neural Information Processing Systems, 2001.
[48] Steven A. Matthews and Nicola Persico. Information acquisition and the excess refund puzzle. Technical Report 05-015, Department of Economics, University of Pennsylvania, March 2005.
[49] Richard D. McKelvey and Andrew McLennan. Computation of equilibria in finite games. In H. Amman, D. A. Kendrick, and J. Rust, editors, Handbook of Computational Economics, volume 1, pages 87-142. Elsevier, 1996.
[50] B. M. E. Moret and H. D. Shapiro. On minimizing a set of tests. SIAM J. Scientific Statistical Computing, 6:983-1003, 1985.
[51] H. Moulin and J.-P. Vial. Strategically zero-sum games: The class of games whose completely mixed equilibria cannot be improved upon. International J. Game Theory, 7(3/4), 1978.
[52] John F. Nash, Jr. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36:48-49, 1950.
[53] Abraham Neyman. Finitely repeated games with finite automata. Mathematics of Operations Research, 23(3):513-552, August 1998.
[54] Christos Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. J. Computer and System Sciences, 48:498-532, 1994.
[55] Christos Papadimitriou. Algorithms, games, and the internet. In Proceedings of the Symposium on the Theory of Computing, pages 749-753, 2001.
[56] Christos H. Papadimitriou. Computing correlated equilibria in multi-player games. In Proceedings of the Symposium on the Theory of Computing, 2005.
[57] Christos H. Papadimitriou and Tim Roughgarden. Computing equilibria in multiplayer games. In Proceedings of the Symposium on Discrete Algorithms, 2005.
[58] Christos H. Papadimitriou and Mihalis Yannakakis. On bounded rationality and computational complexity. In Proceedings of the Symposium on the Theory of Computing, pages 726-733, 1994.
[59] David C. Parkes. Auction design with costly preference elicitation. Annals of Mathematics and Artificial Intelligence, 44:269-302, 2005.
[60] Nicola Persico. Information acquisition in auctions. Econometrica, 68(1):135-148, 2000.
[61] Jean-Pierre Ponssard and Sylvain Sorin. The LP formulation of finite zero-sum games with incomplete information. International J. Game Theory, 9(2):99-105, 1980.
[62] Eric Rasmussen. Strategic implications of uncertainty over one's own private value in auctions. Technical report, Indiana University, 2005.
[63] Leonardo Rezende. Mid-auction information acquisition. Technical report, University of Illinois, 2005.
[64] Ariel Rubinstein. Modeling Bounded Rationality. MIT, 1988.
[65] Barry Schwartz. The Paradox of Choice: Why More is Less. Ecco, 2004.
[66] Herbert Simon. Models of Bounded Rationality. MIT, 1982.
[67] I. Simonson and A. Tversky. Choice in context: Tradeoff contrast and extremeness aversion. J. Marketing Research, 29:281-295, 1992.
[68] Brian Skyrms. Dynamic models of deliberation and the theory of games. In Proceedings of the Theoretical Aspects of Rationality and Knowledge, pages 185-200, 1990.
[69] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT, 1998.
[70] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, 1957.
[71] Bernhard von Stengel. Computing equilibria for two-person games. In R. J. Aumann and S. Hart, editors, Handbook of Game Theory with Economic Applications, volume 3, pages 1723-1759. Elsevier, 2002.
[72] S. Zilberstein and S. Russell. Approximate reasoning using anytime algorithms. In S. Natarajan, editor, Imprecise and Approximate Computation. Kluwer, 1995.
Playing Games in Many Possible Worlds ABSTRACT In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game--either full omniscient knowledge or partial but fixed information. In real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences. In this paper, we model this phenomenon. We imagine a player engaged in a questionand-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting "Socratic" game theory. In a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world. Players can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action. We consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made. The results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players. When the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable - and unobservable-query models. When the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservablequery Socratic games and correlated equilibria in observablequery Socratic games. 1. INTRODUCTION Late October 1960. A smoky room. Democratic Party strategists huddle around a map. How should the Kennedy campaign allocate its remaining advertising budget? Should it focus on, say, California or New York? The Nixon campaign faces the same dilemma. Of course, neither campaign knows the effectiveness of its advertising in each state. Perhaps Californians are susceptible to Nixon's advertising, but are unresponsive to Kennedy's. In light of this uncertainty, the Kennedy campaign may conduct a survey, at some cost, to estimate the effectiveness of its advertising. Moreover, the larger--and more expensive--the survey, the more accurate it will be. Is the cost of a survey worth the information that it provides? How should one balance the cost of acquiring more information against the risk of playing a game with higher uncertainty? In this paper, we model situations of this type as Socratic games. As in traditional game theory, the players in a Socratic game choose actions to maximize their payoffs, but we model players with incomplete information who can make costly queries to reduce their uncertainty about the state of the world before they choose their actions. This approach contrasts with traditional game theory, in which players are usually modeled as having fixed, exogenously given information about the structure of the game and its payoffs. (In traditional games of incomplete and imperfect information, there is information that the players do not have; in Socratic games, unlike in these games, the players have a chance to acquire the missing information, at some cost.) 
A number of related models have been explored by economists and computer scientists motivated by similar situations, often with a focus on mechanism design and auctions; a sampling of this research includes the work of Larson and Sandholm [41, 42, 43, 44], Parkes [59], Fong [22], Compte and Jehiel [12], Rezende [63], Persico and Matthews [48, 60], Cr ´ emer and Khalil [15], Rasmusen [62], and Bergemann and V ¨ alim ¨ aki [4, 5]. The model of Bergemann and V ¨ alim ¨ aki is similar in many regards to the one that we explore here; see Section 7 for some discussion. A Socratic game proceeds as follows. A real world is cho sen randomly from a set of possible worlds according to a common prior distribution. Each player then selects an arbitrary query from a set of available costly queries and receives a corresponding piece of information about the real world. Finally each player selects an action and receives a payoff--a function of the players' selected actions and the identity of the real world--less the cost of the query that he or she made. Compared to traditional game theory, the distinguishing feature of our model is the introduction of explicit costs to the players for learning arbitrary partial information about which of the many possible worlds is the real world. Our research was initially inspired by recent results in psychology on decision making, but it soon became clear that Socratic game theory is also a general tool for understanding the "exploitation versus exploration" tradeoff, well studied in machine learning, in a strategic multiplayer environment. This tension between the risk arising from uncertainty and the cost of acquiring information is ubiquitous in economics, political science, and beyond. Our results. We consider Socratic games under two models: an unobservable-query model where players learn only the response to their own queries and an observable-query model where players also learn which queries their opponents made. We give efficient algorithms to find Nash equilibria--i.e., tuples of strategies from which no player has unilateral incentive to deviate--in broad classes of two-player Socratic games in both models. Our first result is an efficient algorithm to find Nash equilibria in unobservable-query Socratic games with constant-sum worlds, in which the sum of the players' payoffs is independent of their actions. Our techniques also yield Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. Strategically zero-sum games generalize constant-sum games by allowing the sum of the players' payoffs to depend on individual players' choices of strategy, but not on any interaction of their choices. Our second result is an efficient algorithm to find Nash equilibria in observable-query Socratic games with constant-sum worlds. Finally, we give an efficient algorithm to find correlated equilibria--a weaker but increasingly well-studied solution concept for games [2, 3, 32, 56, 57]--in observable-query Socratic games with strategically zero-sum worlds. Like all games, Socratic games can be viewed as a special case of extensive-form games, which represent games by trees in which internal nodes represent choices made by chance or by the players, and the leaves represent outcomes that correspond to a vector of payoffs to the players. Algorithmically, the generality of extensive-form games makes them difficult to solve efficiently, and the special cases that are known to be efficiently solvable do not include even simple Socratic games. 
Every (complete-information) classical game is a trivial Socratic game (with a single possible world and a single trivial query), and efficiently finding Nash equilibria in classical games has been shown to be hard [10, 11, 13, 16, 17, 27, 54, 55]. Therefore we would not expect to find a straightforward polynomial-time algorithm to compute Nash equilibria in general Socratic games. However, it is well known that Nash equilibria can be found efficiently via an LP for two-player constant-sum games [49, 71] (and strategically zero-sum games [51]). A Socratic game is itself a classical game, so one might hope that these results can be applied to Socratic games with constant-sum (or strategically zero-sum) worlds. We face two major obstacles in extending these classical results to Socratic games. First, a Socratic game with constant-sum worlds is not itself a constant-sum classical game--rather, the resulting classical game is only strategically zero sum. Worse yet, a Socratic game with strategically zero-sum worlds is not itself classically strategically zero sum--indeed, there are no known efficient algorithmic techniques to compute Nash equilibria in the resulting class of classical games. (Exponential-time algorithms like Lemke/Howson, of course, can be used [45].) Thus even when it is easy to find Nash equilibria in each of the worlds of a Socratic game, we require new techniques to solve the Socratic game itself. Second, even when the Socratic game itself is strategically zero sum, the number of possible strategies available to each player is exponential in the natural representation of the game. As a result, the standard linear programs for computing equilibria have an exponential number of variables and an exponential number of constraints. For unobservable-query Socratic games with strategically zero-sum worlds, we address these obstacles by formulating a new LP that uses only polynomially many variables (though still an exponential number of constraints) and then use ellipsoid-based techniques to solve it. For observablequery Socratic games, we handle the exponentiality by decomposing the game into stages, solving the stages separately, and showing how to reassemble the solutions efficiently. To solve the stages, it is necessary to find Nash equilibria in Bayesian strategically zero-sum games, and we give an explicit polynomial-time algorithm to do so. 2. GAMES AND SOCRATIC GAMES In this section, we review background on game theory and formally introduce Socratic games. We present these models in the context of two-player games, but the multiplayer case is a natural extension. Throughout the paper, boldface variables will be used to denote a pair of variables (e.g., a = (aI, aII)). Let Pr [x +--π] denote the probability that a particular value x is drawn from the distribution π, and let Ex ∼ π [g (x)] denote the expectation of g (x) when x is drawn from π. 2.1 Background on Game Theory Consider two players, Player I and Player II, each of whom is attempting to maximize his or her utility (or payoff). A (two-player) game is a pair (A, u), where, for i E {I, II}, • Ai is the set of pure strategies for Player i, and A = (AI, AII); and • ui: A--+ R is the utility function for Player i, and u = (uI, uII). We require that A and u be common knowledge. If each Player i chooses strategy ai E Ai, then the payoffs to Players I and II are uI (a) and uII (a), respectively. A game is constant sum if, for all a E A, we have that uI (a) + uII (a) = c for some fixed c independent of a. 
Player i can also play a mixed strategy αi E Ai, where Ai denotes the space of probability measures over the set Ai. αI (aI) · αII (aII) denotes the joint probability of the independent events that each Player i chooses action ai from the distribution αi. This generalization to mixed strategies is known as von Neumann/Morgenstern utility [70], in which players are indifferent between a guaranteed payoff x and an expected payoff of x. A Nash equilibrium is a pair α of mixed strategies so that neither player has an incentive to change his or her strategy unilaterally. Formally, the strategy pair α is a Nash equilibrium if and only if both uI (αI, αII) = maxα ~ IEAI uI (α ~ I, αII) and uII (αI, αII) = maxα ~ IIEAII uII (αI, αII); that is, the strategies αI and αII are mutual best responses. A correlated equilibrium is a distribution ψ over A that obeys the following: if a ∈ A is drawn randomly according to ψ and Player i learns ai, then no Player i has incentive to deviate unilaterally from playing ai. (A Nash equilibrium is a correlated equilibrium in which ψ (a) = αI (aI) · αII (aII) is a product distribution.) Formally, in a correlated equilibrium, for every a ∈ A we must have that aI is a best response to a randomly chosen ˆaII ∈ AII drawn according to ψ (aI, ˆaII), and the analogous condition must hold for Player II. 2.2 Socratic Games In this section, we formally define Socratic games. A Socratic game is a 7-tuple ~ A, W, ~ u, S, Q, p, δ ~, where, fori ∈ {I, II}: • Ai is, as before, the set of pure strategies for Player i. • W is a set of possible worlds, one of which is the real world wreal. • ~ ui = {uwi: A → R | w ∈ W} is a set of payoff functions for Player i, one for each possible world. • S is a set of signals. • Qi is a set of available queries for Player i. When Player i makes query qi: W → S, he or she receives the signal qi (wreal). When Player i receives signal qi (wreal) in response to query qi, he or she can infer that wreal ∈ {w: qi (w) = qi (wreal)}, i.e., the set of possible worlds from which query qi cannot distinguish wreal. • p: W → [0, 1] is a probability distribution over the possible worlds. • δi: Qi → R ≥ 0 gives the query cost for each available query for Player i. Initially, the world wreal is chosen according to the probability distribution p, but the identity of wreal remains unknown to the players. That is, it is as if the players are playing the game ~ A, uwreal ~ but do not know wreal. The players make queries q ∈ Q, and Player i receives the signal qi (wreal). We consider both observable queries and unobservable queries. When queries are observable, each player learns which query was made by the other player, and the results of his or her own query--that is, each Player i learns qI, qII, and qi (wreal). For unobservable queries, Player i learns only qi and qi (wreal). After learning the results of the queries, the players select strategies a ∈ A and receive as payoffs (a) − δi (qi). In the Socratic game, a pure strategy for Player i consists of a query qi ∈ Qi and a response function mapping any result of the query qi to a strategy ai ∈ Ai to play. A player's state of knowledge after a query is a point in R: = Q × S or Ri: = Qi × S for observable or unobservable queries, respectively. Thus Player i's response function maps R or Ri to Ai. Note that the number of pure strategies is exponential, as there are exponentially many response functions. 
A mixed strategy involves both randomly choosing a query qi ∈ Qi and randomly choosing an action ai ∈ Ai in response to the results of the query. Formally, we will consider a mixed-strategy-function profile f = ~ fquery, fresp ~ to have two parts: • a function fquery probability that Player i makes query qi. • a function fresp i that maps R or Ri to a probability distribution over actions. Player i chooses an action ai ∈ Ai according to the probability distribution ffresp i (q, qi (w)) for observable queries, and according to resp i (qi, qi (w)) for unobservable queries. (With unobservable queries, for example, the probability that Player I plays action aI conditioned on making query qI in world w is given by Pr [aI ← fresp I (qI, qI (w))].) Mixed strategies are typically defined as probability distributions over the pure strategies, but here we represent a mixed strategy by a pair ~ f query, fresp ~, which is commonly referred to as a "behavioral" strategy in the game-theory literature. As in any game with perfect recall, one can easily map a mixture of pure strategies to a behavioral strategy f = ~ fquery, fresp ~ that induces the same probability of making a particular query qi or playing a particular action after making a query qi in a particular world. Thus it suffices to consider only this representation of mixed strategies. For a strategy-function profile f for observable queries, the (expected) payoff to Player i is given by The payoffs for unobservable queries are analogous, with fresp j (qj, qj (w)) in place of fresp j (q, qj (w)). 3. STRATEGICALLY ZERO-SUM GAMES We can view a Socratic game G with constant-sum worlds as an exponentially large classical game, with pure strategies "make query qi and respond according to fi." However, this classical game is not constant sum. The sum of the players' payoffs varies depending upon their strategies, because different queries incur different costs. However, this game still has significant structure: the sum of payoffs varies only because of varying query costs. Thus the sum of payoffs does depend on players' choice of strategies, but not on the interaction of their choices--i.e., for fixed functions gI and gII, we have uI (q, f) + uII (q, f) = gI (qI, fI) + gII (qII, fII) for all strategies ~ q, f ~. Such games are called strategically zero sum and were introduced by Moulin and Vial [51], who describe a notion of strategic equivalence and define strategically zero-sum games as those strategically equivalent to zero-sum games. It is interesting to note that two Socratic games with the same queries and strategically equivalent worlds are not necessarily strategically equivalent. A game ~ A, u ~ is strategically zero sum if there exist labels ~ (i, ai) for every Player i and every pure strategy ai ∈ Ai uwreal i such that, for all mixed-strategy profiles α, we have that the sum of the utilities satisfies Note that any constant-sum game is strategically zero sum as well. It is not immediately obvious that one can efficiently decide if a given game is strategically zero sum. For completeness, we give a characterization of classical strategically zero-sum games in terms of the rank of a simple matrix derived from the game's payoffs, allowing us to efficiently decide if a given game is strategically zero sum and, if it is, to compute the labels ~ (i, ai). THEOREM 3.1. Consider a game G = ~ A, u ~ with Ai = {a1i,..., ani i}. 
Let MG be the nI-by-nII matrix whose ~ i, j ~ th entry MG (i, j) satisfies log2 MG (i, j) = uI (aiI, ajII) + uII (aiI, aj II). Then the following are equivalent: (i) G is strategically zero sum; (ii) there exist labels ~ (i, ai) for every player i ∈ {I, II} and every pure strategy ai ∈ Ai such that, for all pure strategies a ∈ A, we have uI (a) + uII (a) = ~ (I, aI) + ~ (II, aII); and (iii) rank (MG) = 1. PROOF SKETCH. (i ⇒ ii) is immediate; every pure strategy is a trivially mixed strategy. For (ii ⇒ iii), let ~ ci be the n-element column vector with jth component 2 ~ (i, aji); then ~ cI · ~ cIIT = MG. For (iii ⇒ i), if rank (MG) = 1, then MG = u · vT. . We can prove that G is strategically zero sum by choosing labels ~ (I, ajI): = log2 uj and ~ (II, ajII): = log2 vj. 4. SOCRATIC GAMES WITH UNOBSERVABLE QUERIES We begin with Socratic games with unobservable queries, where a player's choice of query is not revealed to her opponent. We give an efficient algorithm to solve unobservablequery Socratic games with strategically zero-sum worlds. Our algorithm is based upon the LP shown in Figure 1, whose feasible points are Nash equilibria for the game. The LP has polynomially many variables but exponentially many constraints. We give an efficient separation oracle for the LP, implying that the ellipsoid method [28, 38] yields an efficient algorithm. This approach extends the techniques of Koller and Megiddo [39] (see also [40]) to solve constant-sum games represented in extensive form. (Recall that their result does not directly apply in our case; even a Socratic game with constant-sum worlds is not a constant-sum classical game.) PROOF SKETCH. We begin with a description of the correspondence between feasible points for the LP and Nash equilibria for G. First, suppose that strategy profile f = ~ f query, fresp ~ forms a Nash equilibrium for G. Then the following setting for the LP variables is feasible: (We omit the straightforward calculations that verify feasibility.) Next, suppose ~ xiai, qi, w, yiqi, ρi ~ is feasible for the LP. Let f be the strategy-function profile defined as ffquery: qi ~ → yi i qi resp i (qi, qi (w)): ai ~ → xiai, qi, w/yi qi. Verifying that this strategy profile is a Nash equilibrium requires checking that fresp i (qi, qi (w)) is a well-defined function (from constraint VI), that fquery i and fresp i (qi, qi (w)) are probability distributions (from constraints III and IV), and that each player is playing a best response to his or her opponent's strategy (from constraints I and II). Finally, from constraints I and II, the expected payoff to Player i is at most ρi. Because the right-hand side of constraint VII is equal to the expected sum of the payoffs from f and is at most ρI + ρII, the payoffs are correct and imply the lemma. We now give an efficient separation oracle for the LP in Figure 1, thus allowing the ellipsoid method to solve the LP in polynomial time. Recall that a separation oracle is a function that, given a setting for the variables in the LP, either returns "feasible" or returns a particular constraint of the LP that is violated by that setting of the variables. An efficient, correct separation oracle allows us to solve the LP efficiently via the ellipsoid method. LEMMA 4.2. There exists a separation oracle for the LP in Figure 1 that is correct and runs in polynomial time. PROOF. Here is a description of the separation oracle SP. On input ~ xiai, qi, w, yiqi, ρi ~: 1. Check each of the constraints (III), (IV), (V), (VI), and (VII). 
If any one of these constraints is violated, then return it. 2. Define the strategy profile f as follows: More specifically, given fII and the result qI (wreal) of the query qI, it is straightforward to compute the probability that, conditioned on the fact that the result of query qI is qI (w), the world is w and Player II will play action aII ∈ AII. Therefore, for each query qI and response qI (w), Player I can compute the expected utility of each pure response aI to the induced mixed strategy over AII for Player II. Player I can then select the aI maximizing this expected payoff. ˆfI be the response function such that ˆfI (qI, qI (w)) = ˆfII. "Player i does not prefer ` make query qi, then play according to the function fi"': Figure 1: An LP to find Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. The input is a Socratic game (A, W, ~ u, S, Q, p, δ) so that world w is strategically zero sum with labels ~ (i, ai, w). Player i makes query qi E Qi with probability yi qi and, when the actual world is w E W, makes query qi and plays action ai with probability xiai, qi, w. The expected payoff to Player i is given by ρi. 3. Let ˆρqI I be the expected payoff to Player I using the strategy "make query qI and play response function ˆfI" if Player II plays according to fII. Let ˆρI = maxqIEQq ˆρqII and let ˆqI = arg maxqIEQq ˆρqI I. Similarly, define ˆρqII II, ˆρII, and ˆqII. ˆfi and ˆqi defined in Step 3, return constraint (I-ˆqI-ˆfI) or (II-ˆqII-ˆfII) if either is violated. If both are satisfied, then return "feasible." We first note that the separation oracle runs in polynomial time and then prove its correctness. Steps 1 and 4 are clearly polynomial. For Step 2, we have described how to compute the relevant response functions by examining every action of Player I, every world, every query, and every action of Player II. There are only polynomially many queries, worlds, query results, and pure actions, so the running time of Steps 2 and 3 is thus polynomial. We now sketch the proof that the separation oracle works correctly. The main challenge is to show that if any constraint (I-qI-fI) is violated then (I-ˆqI-ˆfI) is violated in Step 4. First, we observe that, by construction, the function ˆfI computed in Step 3 must be a best response to Player II playing fII, no matter what query Player I makes. Therefore the ˆfI" must be a best response to Player II playing fII, by definition of ˆqI. The right-hand side of each constraint (I-qI-fI) is equal to the expected payoff that Player I receives when playing the pure strategy "make query qI and then play response function fI" against Player II's strategy of fII. Therefore, because the pure strategy "make query ˆqI and then play response function ˆfI" is a best response to Player II playing fII, the right-hand side of constraint (I-ˆqI - ˆfI) is at least as large as the right hand side of any constraint (I-ˆqI-fi). Therefore, if any constraint (I-qI-fI) is violated, constraint (I-ˆqI - ˆfI) is also violated. An analogous argument holds for Player II. These lemmas and the well-known fact that Nash equilibria always exist [52] imply the following theorem: 5. SOCRATIC GAMES WITH OBSERVABLE QUERIES In this section, we give efficient algorithms to find (1) a Nash equilibrium for observable-query Socratic games with constant-sum worlds and (2) a correlated equilibrium in the broader class of Socratic games with strategically zero-sum worlds. 
Recall that a Socratic game G = (A, W, ~ u, S, Q, p, δ) with observable queries proceeds in two stages: Stage 1: The players simultaneously choose queries q E Q. Player i receives as output qI, qII, and qi (wreal). Stage 2: The players simultaneously choose strategies a E A. The payoff to Player i is uwreal Using backward induction, we first solve Stage 2 and then proceed to the Stage-1 game. For a query q E Q, we would like to analyze the Stage-2 "game" ˆGQ resulting from the players making queries q in Stage 1. Technically, however, ˆGQ is not actually a game, because at the beginning of Stage 2 the players have different information about the world: Player I knows qI (wreal), and 4. For the strategy "make query ˆqI, then play response function Player II knows q,, (wreal). Fortunately, the situation in which players have asymmetric private knowledge has been well studied in the game-theory literature. A Bayesian game is a quadruple ~ A, T, r, u ~, where: • Ai is the set of pure strategies for Player i. • Ti is the set of types for Player i. • r is a probability distribution over T; r (t) denotes the probability that Player i has type ti for all i. • ui: A × T → R is the payoff function for Player i. If the players have types t and play pure strategies a, then ui (a, t) denotes the payoff for Player i. Initially, a type t is drawn randomly from T according to the distribution r. Player i learns his type ti, but does not learn any other player's type. Player i then plays a mixed strategy αi ∈ Ai--that is, a probability distribution over Ai--and receives payoff ui (a, t). A strategy function is a function hi: Ti → Ai; Player i plays the mixed strategy hi (ti) ∈ Ai when her type is ti. A strategy-function profile h is a Bayesian Nash equilibrium if and only if no Player i has unilateral incentive to deviate from hi if the other players play according to h. For a two-player Bayesian game, if a = h (t), then the profile h is a Bayesian Nash equilibrium exactly when the following condition and its analogue for Player II hold: Et--r [u, (a, t)] = maxh,, Et--r [u, (~ h ~, (t,), α,, ~, t)]. These conditions hold if and only if, for all ti ∈ Ti occurring with positive probability, Player i's expected utility conditioned on his type being ti is maximized by hi (ti). A Bayesian game is constant sum if for all a ∈ A and all t ∈ T, we have u, (a, t) + u,, (a, t) = ct, for some constant ct independent of a. A Bayesian game is strategically zero sum if the classical game ~ A, u (·, t) ~ is strategically zero sum for every t ∈ T. Whether a Bayesian game is strategically zero sum can be determined as in Theorem 3.1. (For further discussion of Bayesian games, see [25, 31].) We now formally define the Stage-2 "game" as a Bayesian game. Given a Socratic game G = ~ A, W, ~ u, S, Q, p, 6 ~ and a query profile q ∈ Q, we define the Stage-2 Bayesian game Gstage2 (q): = ~ A, Tq, pstage2 (q), ustage2 (q) ~, where: • Ai, the set of pure strategies for Player i, is the same as in the original Socratic game; • Tiq = {qi (w): w ∈ W}, the set of types for Player i, is the set of signals that can result from query qi; • pstage2 (q) (t) = Pr [q (w) = t | w ← p]; and We now define the Stage-1 game in terms of the payoffs for the Stage-2 games. Fix any algorithm alg that finds a Bayesian Nash equilibrium hq, alg: = alg (Gstage2 (q)) for each Stage-2 game. 
Define valuealg i (Gstage2 (q)) to be the expected payoff received by Player i in the Bayesian game Gstage2 (q) if each player plays according to hq, alg, that is, • Astage1: = Q, the set of available queries in the Socratic game; and I.e., players choose queries q and receive payoffs corresponding to valuealg (Gstage2 (q)), less query costs. LEMMA 5.1. Consider an observable-query Socratic game G = ~ A, W, ~ u, S, Q, p, 6 ~. Let Gstage2 (q) be the Stage-2 games for all q ∈ Q, let alg be an algorithm finding a Bayesian Nash equilibrium in each Gstage2 (q), and let Galg stage1 be the Stage-1 game. Let a be a Nash equilibrium for Galg stage1, and let hq, alg: = alg (Gstage2 (q)) be a Bayesian Nash equilibrium for each Gstage2 (q). Then the following strategy profile is a Nash equilibrium for G: • In Stage 1, Player i makes query qi with probability αi (qi). (That is, set fquery (q): = a (q).) • In Stage 2, if q is the query in Stage 1 and qi (wreal) denotes the response to Player i's query, then Player i chooses action ai with probability hq, alg We now find equilibria in the stage games for Socratic games with constant - or strategically zero-sum worlds. We first show that the stage games are well structured in this setting: stage1 is strategically zero sum for every algorithm alg, and every Stage-2 game Gstage2 (q) is Bayesian constant sum. If the worlds of G are strategically zero sum, then every Gstage2 (q) is Bayesian strategically zero sum. We now show that we can efficiently compute equilibria for these well-structured stage games. THEOREM 5.3. There exists a polynomial-time algorithm BNE finding Bayesian Nash equilibria in strategically zerosum Bayesian (and thus classical strategically zero-sum or Bayesian constant-sum) two-player games. PROOF SKETCH. Let G = ~ A, T, r, u ~ be a strategically zero-sum Bayesian game. Define an unobservable-query Socratic game G * with one possible world for each t ∈ T, one available zero-cost query qi for each Player i so that qi reveals ti, and all else as in G. Bayesian Nash equilibria in G correspond directly to Nash equilibria in G *, and the worlds of G * are strategically zero sum. Thus by Theorem 4.3 we can compute Nash equilibria for G *, and thus we can compute Bayesian Nash equilibria for G. (LP's for zero-sum two-player Bayesian games have been previously developed and studied [61].) THEOREM 5.4. We can compute a Nash equilibrium for an arbitrary two-player observable-query Socratic game G = ~ A, W, ~ u, S, Q, p, 6 ~ with constant-sum worlds in polynomial time. PROOF. Because each world of G is constant sum, Lemma 5.2 implies that the induced Stage-2 games Gstage2 (q) are all Bayesian constant sum. Thus we can use algorithm BNE to compute a Bayesian Nash equilibrium hq, BNE: = BNE (Gstage2 (q)) for each q ∈ Q, by Theorem 5.3. Furthermore, again by Lemma 5.2, the induced Stage-1 game GBNE stage1 is classical strategically zero sum. Therefore we can again use algorithm BNE to compute a Nash equilibrium a: = BNE (GBNE stage1), again by Theorem 5.3. Therefore, by Lemma 5.1, we can assemble a and the hq, BNE's into a Nash equilibrium for the Socratic game G. We would like to extend our results on observable-query Socratic games to Socratic games with strategically zerosum worlds. While we can still find Nash equilibria in the Stage-2 games, the resulting Stage-1 game is not in general strategically zero sum. 
Thus, finding Nash equilibria in observable-query Socratic games with strategically zero-sum worlds seems to require substantially new techniques. However, our techniques for decomposing observable-query Socratic games do allow us to find correlated equilibria in this case.

LEMMA 5.5. Consider an observable-query Socratic game G = ⟨A, W, u, S, Q, p, δ⟩. Let alg be an arbitrary algorithm that finds a Bayesian Nash equilibrium in each of the derived Stage-2 games G^{stage2}(q), and let G^{alg}_{stage1} be the derived Stage-1 game. Let α be a Nash equilibrium for G^{alg}_{stage1}, and let h^{q,alg} := alg(G^{stage2}(q)) be a Bayesian Nash equilibrium for each G^{stage2}(q). Then the following distribution ψ over pure strategies — where a pure strategy for Player i consists of a query q_i together with a response function f_i : S → A_i — is a correlated equilibrium for G:

ψ(q, f) := α(q) · ∏_{i ∈ {I,II}} ∏_{s ∈ S} h_i^{q,alg}(s)(f_i(s)).

Thus to find a correlated equilibrium in an observable-query Socratic game with strategically zero-sum worlds, we need only algorithm BNE from Theorem 5.3 along with an efficient algorithm for finding a correlated equilibrium in a general game. Such an algorithm exists (the definition of correlated equilibria can be directly translated into an LP [3]), and therefore we have the following theorem:

THEOREM 5.6. We can provide both efficient oracle access and efficient sampling access to a correlated equilibrium for any observable-query two-player Socratic game with strategically zero-sum worlds.

Because the support of the correlated equilibrium may be exponentially large, providing oracle and sampling access is the natural way to represent the correlated equilibrium. By Lemma 5.5, we can also compute correlated equilibria in any observable-query Socratic game for which Nash equilibria are computable in the induced G^{stage2}(q) games (e.g., when G^{stage2}(q) is of constant size). Another potentially interesting model of queries in Socratic games is what one might call public queries, in which both the choice and the outcome of a player's query are observable by all players in the game. (This model might be most appropriate in the presence of corporate espionage or media leaks, or in a setting in which the queries — and thus their results — are done in plain view.) The techniques that we have developed in this section yield exactly the same results for public queries as for observable queries. The proof is actually simpler: with public queries, the players' payoffs are common knowledge when Stage 2 begins, and thus Stage 2 really is a complete-information game. (There may still be uncertainty about the real world, but all players use the observed signals to infer exactly the same set of possible worlds in which w_real may lie; thus they are playing a complete-information game against each other.) Thus we obtain the same results as in Theorems 5.4 and 5.6 more simply, by solving Stage 2 using a (non-Bayesian) Nash-equilibrium finder and solving Stage 1 as before. Our results for observable queries are weaker than for unobservable ones: in Socratic games with worlds that are strategically zero sum but not constant sum, we find only a correlated equilibrium in the observable case, whereas we find a Nash equilibrium in the unobservable case. We might hope to extend our unobservable-query techniques to observable queries, but there is no obvious way to do so. The fundamental obstacle is that the LP's payoff constraint becomes nonlinear if there is any dependence on the probability that the other player made a particular query. This dependence arises with observable queries, suggesting that observable Socratic games with strategically zero-sum worlds may be harder to solve.
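Theorem 5.6 promises sampling access rather than an explicit table precisely because the support of ψ is exponential. A minimal sketch of such a sampler, assuming α and the h^{q,alg} have already been computed and stored in dictionaries (all identifiers here are ours, not the paper's): draw a query profile from α, then draw each player's response function signal by signal from the Stage-2 equilibrium.

import random

def sample_correlated_profile(alpha, h, signals, actions):
    # Stage 1: draw a query profile q with probability alpha[q].
    queries = list(alpha)
    q = random.choices(queries, weights=[alpha[x] for x in queries])[0]
    profile = []
    for i in (0, 1):  # Players I and II
        # Stage 2: draw a full response function f_i : signals -> actions by
        # sampling one action from the mixed strategy h[q][i][s] per signal s.
        f_i = {s: random.choices(actions[i], weights=h[q][i][s])[0]
               for s in signals}
        profile.append((q[i], f_i))
    return tuple(profile)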
6. RELATED WORK

Our work was initially motivated by research in the social sciences indicating that real people seem (irrationally) paralyzed when they are presented with additional options. In this section, we briefly review some of these social-science experiments and then discuss technical approaches related to Socratic game theory. Prima facie, a rational agent's happiness given an added option can only increase. However, recent research has found that more choices tend to decrease happiness: for example, students choosing among extra-credit options are more likely to do extra credit if given a small subset of the choices and, moreover, produce higher-quality work [35]. (See also [19].) The psychology literature explores a number of explanations: people may miscalculate their opportunity cost by comparing their choice to a "component-wise maximum" of all other options instead of the single best alternative [65], a new option may draw undue attention to aspects of the other options [67], and so on. The present work explores an economic explanation of this phenomenon: information is not free. When there are more options, a decision-maker must spend more time to achieve a satisfactory outcome. See, e.g., the work of Skyrms [68] for a philosophical perspective on the role of deliberation in strategic situations. Finally, we note the connection between Socratic games and modal logic [34], a formalism for the logic of possibility and necessity.

The observation that human players typically do not play "rational" strategies has inspired some attempts to model "partially" rational players. The typical model of this so-called bounded rationality [36, 64, 66] is to postulate bounds on computational power in computing the consequences of a strategy. The work on bounded rationality [23, 24, 53, 58] differs from the models that we consider here in that instead of putting hard limitations on the computational power of the agents, we restrict their a priori knowledge of the state of the world, requiring them to spend time (and therefore money/utility) to learn about it. Partially observable stochastic games (POSGs) are a general framework used in AI to model situations of multi-agent planning in an evolving, unknown environment, but the generality of POSGs seems to make them very difficult to solve [6]. Recent work has been done in developing algorithms for restricted classes of POSGs, most notably classes of cooperative POSGs — e.g., [20, 30] — which are very different from the competitive strategically zero-sum games we address in this paper.

The fundamental question in Socratic games is deciding on the comparative value of making a more costly but more informative query, or concluding the data-gathering phase and picking the best option given current information. This tradeoff has been explored in a variety of other contexts; a sampling of these contexts includes aggregating results from delay-prone information sources [8], doing approximate reasoning in intelligent systems [72], and deciding when to take the current best guess of disease diagnosis from a belief-propagation network and when to let it continue inference [33], among many others. This issue can also be viewed as another perspective on the general question of exploration versus exploitation that arises often in AI: when is it better to actively seek additional information instead of exploiting the knowledge one already has? (See, e.g., [69].)
Most of this work differs significantly from our own in that it considers single-agent planning as opposed to the game-theoretic setting. A notable exception is the work of Larson and Sandholm [41, 42, 43, 44] on mechanism design for interacting agents whose computation is costly and limited. They present a model in which players must solve a computationally intractable valuation problem, using costly computation to learn some hidden parameters, and give results for auctions and bargaining games in this model.

7. FUTURE DIRECTIONS

Efficiently finding Nash equilibria in Socratic games with non-strategically zero-sum worlds is probably difficult, because the existence of such an algorithm for classical games has been shown to be unlikely [10, 11, 13, 16, 17, 27, 54, 55]. There has, however, been some algorithmic success in finding Nash equilibria in restricted classical settings (e.g., [21, 46, 47, 57]); we might hope to extend our results to analogous Socratic games. An efficient algorithm to find correlated equilibria in general Socratic games seems more attainable. Suppose the players receive recommended queries and responses. The difficulty is that when a player considers a deviation from his recommended query, he already knows his recommended response in each of the Stage-2 games. In a correlated equilibrium, a player's expected payoff generally depends on his recommended strategy, and thus a player may deviate in Stage 1 so as to land in a Stage-2 game where he has been given a "better than average" recommended response. (Socratic games are "succinct games of superpolynomial type," so Papadimitriou's results [56] do not imply correlated equilibria for them.)

Socratic games can be extended to allow players to make adaptive queries, choosing subsequent queries based on previous results. Our techniques carry over to O(1) rounds of unobservable queries, but it would be interesting to compute equilibria in Socratic games with adaptive observable queries or with ω(1) rounds of unobservable queries. Special cases of adaptive Socratic games are closely related to single-agent problems like minimum latency [1, 7, 26], determining strategies for using priced information [9, 29, 37], and an online version of minimum test cover [18, 50]. Although there are important technical distinctions between adaptive Socratic games and these problems, approximation techniques from this literature may apply to Socratic games. The question of approximation raises interesting questions even in non-adaptive Socratic games. An ε-approximate Nash equilibrium is a strategy profile α such that no player can increase her payoff by an additive ε by deviating from α. Finding approximate Nash equilibria in both adaptive and non-adaptive Socratic games is an interesting direction to pursue.

Another natural extension is the model where query results are stochastic. In this paper, we model a query as deterministically partitioning the possible worlds into subsets that the query cannot distinguish. However, one could instead model a query as probabilistically mapping the set of possible worlds into the set of signals. With this modification, our unobservable-query model becomes equivalent to the model of Bergemann and Välimäki [4, 5], in which the result of a query is a posterior distribution over the worlds.
Our techniques allow us to compute equilibria in such a "stochastic-query" model provided that each query is represented as a table that, for each world/signal pair, lists the probability that the query outputs that signal in that world. It is also interesting to consider settings in which the game's queries are specified by a compact representation of the relevant probability distributions. (For example, one might consider a setting in which the algorithm has only a sampling oracle for the posterior distributions envisioned by Bergemann and Välimäki.) Efficiently finding equilibria in such settings remains an open problem.

Another interesting setting for Socratic games is when the set Q of available queries is given by Q = P(Γ) — i.e., each player chooses to make a set q ∈ P(Γ) of queries from a specified groundset Γ of queries. Here we take the query cost to be a linear function, so that δ(q) = Σ_{γ ∈ q} δ({γ}). Natural groundsets include comparison queries ("if my opponent is playing strategy a_II, would I prefer to play a_I or â_I?"), strategy queries ("what is my vector of payoffs if I play strategy a_I?"), and world-identity queries ("is the world w ∈ W the real world?"). When one can infer a polynomial bound on the number of queries made by a rational player, our results yield efficient solutions. (For example, we can efficiently solve games in which every groundset element γ ∈ Γ has δ({γ}) = Ω(M − m), where M and m denote the maximum and minimum payoffs to any player in any world.) Conversely, it is NP-hard to compute a Nash equilibrium for such a game when every δ({γ}) < 1/|W|², even when the worlds are constant sum and Player II has only a single available strategy. Thus even computing a best response for Player I is hard. (This proof proceeds by reduction from set cover; intuitively, for sufficiently low query costs, Player I must fully identify the actual world through his queries, and selecting a minimum-sized set of such queries is hard.) Computing Player I's best response can be viewed as maximizing a submodular function, and thus a best response can be (1 − 1/e) ≈ 0.63-approximated greedily [14]. An interesting open question is whether this approximate best-response calculation can be leveraged to find an approximate Nash equilibrium.
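For flavor, here is the classical greedy rule behind the (1 − 1/e) guarantee cited from [14], stated for monotone submodular maximization under a cardinality constraint; the paper's best-response objective also involves query costs, so this is a sketch of the underlying technique rather than of the exact setting. The helper marginal_value is hypothetical: it would return Player I's gain in expected payoff from adding query g to the already-chosen set.

def greedy_query_set(groundset, k, marginal_value):
    # Classical greedy for monotone submodular maximization subject to a
    # cardinality constraint |q| <= k: repeatedly add the query with the
    # largest marginal gain, stopping early when no query helps.
    chosen = set()
    for _ in range(k):
        candidates = [g for g in groundset if g not in chosen]
        if not candidates:
            break
        best = max(candidates, key=lambda g: marginal_value(chosen, g))
        if marginal_value(chosen, best) <= 0:
            break
        chosen.add(best)
    return chosen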
I-73
Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed
In open MAS, achieving interoperability among agents is often a problem. The heterogeneity of their components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models, and deciding whether to trust other agents is a crucial precondition for the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interactions among heterogeneous agents when exchanging reputation values, and may be used by agents that use different reputation models.
[ "reput valu", "reput", "heterogen agent", "reput model", "art testb", "art testb", "interoper", "trust", "ontolog", "multiag system", "autonom distribut agent", "reput format", "agent architectur", "function ontolog of reput" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "R", "M", "M" ]
Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed Anarosa A. F. Brandão1, Laurent Vercouter2, Sara Casare1 and Jaime Sichman1 1 Laboratório de Técnicas Inteligentes - EP/USP Av. Prof. Luciano Gualberto, 158, trav. 3, 05508-970, São Paulo - Brazil +55 11 3091 5397 anarosabrandao@gmail.com, {sara.casare,jaime.sichman}@poli.usp.br 2 Ecole Nationale Supérieure des Mines de Saint-Etienne 158, cours Fauriel, 42023 Saint-Etienne Cedex 2, France Laurent.Vercouter@emse.fr

ABSTRACT

In open MAS, achieving interoperability among agents is often a problem. The heterogeneity of their components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models, and deciding whether to trust other agents is a crucial precondition for the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interactions among heterogeneous agents when exchanging reputation values, and may be used by agents that use different reputation models.

Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems

General Terms Design, Experimentation, Standardization.

1. INTRODUCTION

Open multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at will, because open systems have no centralized control over the development of their parts [1]. Since agents are considered autonomous entities, we cannot assume that there is a way to control their internal behavior. These features are useful for building flexible and adaptive systems, but they also create new risks concerning the reliability and robustness of the system. Solutions to this problem have been proposed by way of trust models, where agents are endowed with a model of other agents that allows them to decide whether or not they can trust another agent. Such a trust decision is very important because it is an essential condition for the formation of agents' cooperation. Trust decision processes use the concept of reputation as the basis of the decision. Reputation is a subject that has been studied in several works [4][5][8][9] with different approaches, but also with different semantics attached to the reputation concept. Casare and Sichman [2][3] proposed a Functional Ontology of Reputation (FORe) and gave some directions on how it could be used to enable interoperability among different agent reputation models. This paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models. An outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6].

2. THE FUNCTIONAL ONTOLOGY OF REPUTATION (FORe)

In recent years, several computational models of reputation have been proposed [7][10][13][14]. As examples of research produced in the MAS field we refer to three of them: a cognitive reputation model [5], a typology of reputation [7] and the reputation model used in the ReGret system [9][10]. Each model includes its own specific concepts that may not exist in other models, or that exist with a different name. For instance, Image and Reputation are two central concepts in the cognitive reputation model.
These concepts do not exist in the typology of reputation or in the ReGret model. In the typology of reputation, we can find some similar concepts, such as direct reputation and indirect reputation, but there are some slight semantic differences. In the same way, the ReGret model includes four kinds of reputation (direct, witness, neighborhood and system) that overlap with the concepts of other models but are not exactly the same. The Functional Ontology of Reputation (FORe) was defined as a common semantic basis that subsumes the concepts of the main reputation models. The FORe includes, as its kernel, the following concepts: reputation nature, roles involved in reputation formation and propagation, information sources for reputation, evaluation of reputation, and reputation maintenance. The ontology concept ReputationNature is composed of concepts such as IndividualReputation, GroupReputation and ProductReputation. Reputation formation and propagation involves several roles, played by the entities or agents that participate in those processes. The ontology defines the concepts ReputationProcess and ReputationRole. Moreover, reputation can be classified according to the origin of the beliefs and opinions, which can derive from several sources. The ontology defines the concept ReputationType, which can be PrimaryReputation or SecondaryReputation. PrimaryReputation is composed of the concepts ObservedReputation and DirectReputation, and SecondaryReputation is composed of concepts such as PropagatedReputation and CollectiveReputation. More details about the FORe can be found in [2][3].

3. MAPPING THE AGENT REPUTATION MODELS TO THE FORe

Visser et al. [12] suggest three different ways to support semantic integration of different sources of information: a centralized approach, where each source of information is related to one common domain ontology; a decentralized approach, where every source of information is related to its own ontology; and a hybrid approach, where every source of information has its own ontology and the vocabularies of these ontologies are related to a common ontology. The latter organizes the common global vocabulary in order to support comparison of the source ontologies. Casare and Sichman [3] used the hybrid approach to show that the FORe serves as a common ontology for several reputation models. Therefore, considering the ontologies that describe the agent reputation models, we can define a mapping between these ontologies and the FORe whenever the ontologies use a common vocabulary. Moreover, the information concerning the mappings between the agent reputation models and the FORe can be directly inferred by classifying, in an ontology tool with a reasoning engine, the ontology that results from integrating a given reputation model ontology with the FORe. For instance, a mapping between the Cognitive Reputation Model ontology and the FORe relates the concepts Image and Reputation to PrimaryReputation and SecondaryReputation from the FORe, respectively. Also, a mapping between the Typology of Reputation and the FORe relates the concepts Direct Reputation and Indirect Reputation to PrimaryReputation and SecondaryReputation from the FORe, respectively. The concepts Direct Trust and Witness Reputation from the ReGret System Reputation Model are mapped to PrimaryReputation and PropagatedReputation from the FORe. Since PropagatedReputation is a sub-concept of SecondaryReputation, it can be inferred that Witness Reputation is also mapped to SecondaryReputation.
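The mappings just described can be summarized in a few lines of data. The following sketch is ours, not the authors' implementation, and is deliberately limited to the concepts named in the text; it encodes the model-to-FORe mappings and the sub-concept inference used for Witness Reputation.

FORE_PARENT = {
    # Sub-concept relations named in the text: specializations of
    # ReputationType, with PropagatedReputation under SecondaryReputation.
    "ObservedReputation": "PrimaryReputation",
    "DirectReputation": "PrimaryReputation",
    "PropagatedReputation": "SecondaryReputation",
    "CollectiveReputation": "SecondaryReputation",
}

TO_FORE = {
    "CognitiveModel": {"Image": "PrimaryReputation",
                       "Reputation": "SecondaryReputation"},
    "Typology": {"DirectReputation": "PrimaryReputation",
                 "IndirectReputation": "SecondaryReputation"},
    "ReGret": {"DirectTrust": "PrimaryReputation",
               "WitnessReputation": "PropagatedReputation"},
}

def generalizations(fore_concept):
    """Walk up the FORe hierarchy, mirroring the inference in the text."""
    chain = [fore_concept]
    while chain[-1] in FORE_PARENT:
        chain.append(FORE_PARENT[chain[-1]])
    return chain

# Witness Reputation maps to PropagatedReputation, hence SecondaryReputation:
print(generalizations(TO_FORE["ReGret"]["WitnessReputation"]))
# -> ['PropagatedReputation', 'SecondaryReputation']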
4. EXPERIMENTAL SCENARIOS USING THE ART TESTBED

To exemplify the use of the mappings from the last section, we define a scenario where several agents are implemented using different agent reputation models. This scenario includes the agents' interaction during the simulation of the game defined by ART [6], in order to describe the ways interoperability is possible between different trust models using the FORe.

4.1 The ART testbed

The ART testbed provides a simulation engine on which several agents, using different trust models, may run. The simulation consists in a game where the agents have to decide whether or not to trust other agents. The game's domain is art appraisal, in which agents are required to evaluate the value of paintings based on information exchanged among other agents during agents' interaction. The information can be an opinion transaction, when an agent asks other agents to help it in its evaluation of a painting, or a reputation transaction, when the information required is about the reputation of another agent (a target) for a given era. More details about the ART testbed can be found in [6]. The ART common reputation model was enhanced with semantic data obtained from the FORe. A general agent architecture for interoperability was defined [11] to allow agents to reason about the information received from reputation interactions. This architecture contains two main modules: the Reputation Mapping Module (RMM), which is responsible for mapping concepts between an agent reputation model and the FORe, and the Reputation Reasoning Module (RRM), which is responsible for dealing with information about reputation according to the agent reputation model.

4.2 Reputation transaction scenarios

While including the FORe in the ART common reputation model, we have extended it to allow richer interactions involving reputation transactions. In this section we describe scenarios concerning reputation transactions in the context of the ART testbed; the first is valid for any kind of reputation transaction and the second is specific to the ART domain.

4.2.1 General scenario

Suppose that agents A, B and C are implemented according to the aforementioned general agent architecture with the enhanced ART common reputation model, using different reputation models. Agent A uses the Typology of Reputation model, agent B uses the Cognitive Reputation Model and agent C uses the ReGret System model. Consider the interaction about reputation where agents A and B receive from agent C information about the reputation of agent Y. A big picture of this interaction is shown in Figure 1.

[Figure 1. Interaction about reputation: agent C (ReGret ontology) sends (Y, value=0.8, witness reputation); the message travels in the common model as (Y, value=0.8, PropagatedReputation) and is delivered to agent A (Typology ontology: propagated reputation) and agent B (Cognitive Model ontology: reputation).]
4.2.2 ART scenario Considering the same agents A and B and the art appraisal domain of ART, another interesting scenario describes the following situation: agent A asks agent B for information about agents it knows to have skill in some specific painting era. In this case agent A wants information concerning the direct reputation agent B has about agents that have skill in a specific era, such as cubism. Following the same steps as in the previous scenario, agent A's message is prepared in its RRM using information from its internal model. An overview of this interaction is shown in Figure 2. [Figure 2. Interaction about specific types of reputation values: agent A (Typology ontology) sends the query (agent=?, value=?, skill=cubism, reputation=directreputation); it travels through the common model as (agent=?, value=?, skill=cubism, reputation=PrimaryReputation) and reaches agent B (Cognitive Model ontology) as (agent=?, value=?, skill=cubism, reputation=image).] Agent B's response to agent A is processed in its RRM and is composed of tuples (agent, value, cubism, image), where each pair (agent, value) gives an agent whose expertise about cubism agent B knows from its own opinion, together with the associated reputation value. This response is forwarded to the RMM in order to be translated into the enriched common model and sent to agent A. After receiving the information sent by agent B, agent A processes it in its RMM and translates it into its own reputation model to be analyzed by its RRM. 5. CONCLUSION In this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model. A reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa. The ART testbed has been enriched to use the ontology during agent transactions. Some scenarios were described to illustrate our proposal, and they seem to be a promising way to improve the process of building reputation using only existing technologies. 6. ACKNOWLEDGMENTS Anarosa A. F. Brandão is supported by CNPq/Brazil grant 310087/2006-6 and Jaime Sichman is partially supported by CNPq/Brazil grants 304605/2004-2, 482019/2004-2 and 506881/2004-1. Laurent Vercouter was partially supported by FAPESP grant 2005/02902-5. 7. REFERENCES [1] Agha, G. A. Abstracting Interaction Patterns: A Programming Paradigm for Open Distributed Systems. In E. Najm and J.-B. Stefani (Eds), Formal Methods for Open Object-based Distributed Systems, IFIP Transactions, Chapman & Hall, 1997. [2] Casare, S. and Sichman, J.S.
Towards a Functional Ontology of Reputation. In Proc. of the 4th Intl. Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'05), Utrecht, The Netherlands, 2005, v.2, pp. 505-511. [3] Casare, S. and Sichman, J.S. Using a Functional Ontology of Reputation to Interoperate Different Agent Reputation Models. Journal of the Brazilian Computer Society, (2005), 11(2), pp. 79-94. [4] Castelfranchi, C. and Falcone, R. Principles of trust in MAS: cognitive anatomy, social importance and quantification. In Proceedings of ICMAS'98, Paris, 1998, pp. 72-79. [5] Conte, R. and Paolucci, M. Reputation in Artificial Societies: Social Beliefs for Social Order. Kluwer Publ., 2002. [6] Fullam, K.; Klos, T.; Muller, G.; Sabater, J.; Topol, Z.; Barber, S.; Rosenchein, J.; Vercouter, L. and Voss, M. A specification of the agent reputation and trust (ART) testbed: experimentation and competition for trust in agent societies. In Proc. of the 4th Intl. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS'05), ACM, 2005, pp. 512-518. [7] Mui, L.; Halberstadt, A.; Mohtashemi, M. Notions of Reputation in Multi-Agents Systems: A Review. In Proc. of the 1st Intl. Joint Conf. on Autonomous Agents and Multi-agent Systems (AAMAS 2002), Bologna, Italy, 2002, part 1, pp. 280-287. [8] Muller, G. and Vercouter, L. Decentralized monitoring of agent communication with a reputation model. In Trusting Agents for Trusting Electronic Societies, LNCS 3577, 2005, pp. 144-161. [9] Sabater, J. and Sierra, C. ReGret: Reputation in gregarious societies. In Müller, J. et al (Eds), Proc. of the 5th Intl. Conf. on Autonomous Agents, Canada, 2001, ACM, pp. 194-195. [10] Sabater, J. and Sierra, C. Review on Computational Trust and Reputation Models. Artificial Intelligence Review, Kluwer Acad. Publ., (2005), v. 24, n. 1, pp. 33-60. [11] Vercouter, L., Casare, S., Sichman, J. and Brandão, A. An experience on reputation models interoperability based on a functional ontology. In Proc. of the 20th IJCAI, Hyderabad, India, 2007, pp. 617-622. [12] Visser, U.; Stuckenschmidt, H.; Wache, H. and Vogele, T. Enabling technologies for inter-operability. In U. Visser and H. Pundt (Eds), Workshop on the 14th Intl. Symp. of Computer Science for Environmental Protection, Bonn, Germany, 2000, pp. 35-46. [13] Yu, B. and Singh, M.P. An Evidential Model of Distributed Reputation Management. In Proc. of the 1st Intl. Joint Conf. on Autonomous Agents and Multi-agent Systems (AAMAS 2002), Bologna, Italy, 2002, part 1, pp. 294-301. [14] Zacharia, G. and Maes, P. Trust Management Through Reputation Mechanisms. Applied Artificial Intelligence, 14(9), 2000, pp. 881-907.
Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed ABSTRACT In open MAS it is often a problem to achieve agents' interoperability. The heterogeneity of its components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models, and the decision about trusting other agents is a crucial condition for the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interaction among heterogeneous agents when exchanging reputation values and may be used by agents that use different reputation models. 1. INTRODUCTION Open multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at will, because open systems have no centralized control over the development of their parts [1]. Since agents are considered as autonomous entities, we cannot assume that there is a way to control their internal behavior. These features are interesting for obtaining flexible and adaptive systems, but they also create new risks concerning the reliability and the robustness of the system. Solutions to this problem have been proposed by way of trust models, where agents are endowed with a model of other agents that allows them to decide if they can or cannot trust another agent. Such a trust decision is very important because it is an essential condition for the formation of agents' cooperation. The trust decision processes use the concept of reputation as the basis of a decision. Reputation is a subject that has been studied in several works [4] [5] [8] [9] with different approaches, but also with different semantics attached to the reputation concept. Casare and Sichman [2] [3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow the interoperability among different agent reputation models. This paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models. An outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6]. 2. THE FUNCTIONAL ONTOLOGY OF REPUTATION (FORe) In the last years several computational models of reputation have been proposed [7] [10] [13] [14]. As an example of research produced in the MAS field we refer to three of them: a cognitive reputation model [5], a typology of reputation [7] and the reputation model used in the ReGret system [9] [10]. Each model includes its own specific concepts that may not exist in other models, or exist with a different name. For instance, Image and Reputation are two central concepts in the cognitive reputation model. These concepts do not exist in the typology of reputation or in the ReGret model.
In the typology of reputation, we can find some similar concepts such as direct reputation and indirect reputation, but there are some slight semantic differences. In the same way, the ReGret model includes four kinds of reputation (direct, witness, neighborhood and system) that overlap with the concepts of other models but that are not exactly the same. The Functional Ontology of Reputation (FORe) was defined as a common semantic basis that subsumes the concepts of the main reputation models. The FORe includes, as its kernel, the following concepts: reputation nature, roles involved in reputation formation and propagation, information sources for reputation, evaluation of reputation, and reputation maintenance. The ontology concept ReputationNature is composed of concepts such as IndividualReputation, GroupReputation and ProductReputation. Reputation formation and propagation involves several roles, played by the entities or agents that participate in those processes. The ontology defines the concepts ReputationProcess and ReputationRole. Moreover, reputation can be classified according to the origin of beliefs and opinions, which can derive from several sources. The ontology defines the concept ReputationType, which can be PrimaryReputation or SecondaryReputation. PrimaryReputation is composed of the concepts ObservedReputation and DirectReputation, and SecondaryReputation is composed of concepts such as PropagatedReputation and CollectiveReputation. More details about the FORe can be found in [2] [3]. 3. MAPPING THE AGENT REPUTATION MODELS TO THE FORe Visser et al [12] suggest three different ways to support semantic integration of different sources of information: a centralized approach, where each source of information is related to one common domain ontology; a decentralized approach, where every source of information is related to its own ontology; and a hybrid approach, where every source of information has its own ontology and the vocabularies of these ontologies are related to a common ontology. The latter organizes the common global vocabulary in order to support the comparison of the source ontologies. Casare and Sichman [3] used the hybrid approach to show that the FORe serves as a common ontology for several reputation models. Therefore, considering the ontologies which describe the agent reputation models, we can define a mapping between these ontologies and the FORe whenever the ontologies use a common vocabulary. Also, the information concerning the mappings between the agent reputation models and the FORe can be directly inferred by simply classifying the ontology resulting from the integration of a given reputation model ontology and the FORe in an ontology tool with a reasoning engine. For instance, a mapping between the Cognitive Reputation Model ontology and the FORe relates the concepts Image and Reputation to PrimaryReputation and SecondaryReputation from FORe, respectively. Also, a mapping between the Typology of Reputation and the FORe relates the concepts Direct Reputation and Indirect Reputation to PrimaryReputation and SecondaryReputation from FORe, respectively. In contrast, the concepts Direct Trust and Witness Reputation from the ReGret System Reputation Model are mapped to PrimaryReputation and PropagatedReputation from FORe. Since PropagatedReputation is a sub-concept of SecondaryReputation, it can be inferred that Witness Reputation is also mapped to SecondaryReputation.
4. EXPERIMENTAL SCENARIOS USING THE ART TESTBED To exemplify the use of the mappings from the last section, we define a scenario where several agents are implemented using different agent reputation models. This scenario includes the agents' interaction during the simulation of the game defined by ART [6], in order to describe the ways interoperability is possible between different trust models using the FORe. 4.1 The ART testbed The ART testbed provides a simulation engine on which several agents, using different trust models, may run. The simulation consists of a game where the agents have to decide whether or not to trust other agents. The game's domain is art appraisal, in which agents are required to evaluate the value of paintings based on information exchanged with other agents during their interaction. The information can be an opinion transaction, when an agent asks other agents to help it in its evaluation of a painting; or a reputation transaction, when the information required is about the reputation of another agent (a target) for a given era. More details about the ART testbed can be found in [6]. The ART common reputation model was enhanced with semantic data obtained from FORe. A general agent architecture for interoperability was defined [11] to allow agents to reason about the information received from reputation interactions. This architecture contains two main modules: the Reputation Mapping Module (RMM), which is responsible for mapping concepts between an agent reputation model and FORe; and the Reputation Reasoning Module (RRM), which is responsible for dealing with information about reputation according to the agent reputation model. 4.2 Reputation transaction scenarios While integrating the FORe into the ART common reputation model, we extended it to allow richer interactions that involve reputation transactions. In this section we describe scenarios concerning reputation transactions in the context of the ART testbed; the first is valid for any kind of reputation transaction and the second is specific to the ART domain. 4.2.1 General scenario Suppose that agents A, B and C are implemented according to the aforementioned general agent architecture with the enhanced ART common reputation model, using different reputation models. Agent A uses the Typology of Reputation model, agent B uses the Cognitive Reputation Model and agent C uses the ReGret System model. Consider the interaction about reputation where agents A and B receive from agent C information about the reputation of agent Y. An overview of this interaction is shown in Figure 1 (Interaction about reputation). The information witness reputation from agent C is treated by its RMM and is sent as PropagatedReputation to both agents. The corresponding information in agent A's reputation model is propagated reputation and in agent B's reputation model is reputation. The way agents A and B make use of the information depends on their internal reputation model and their RRM implementation. 4.2.2 ART scenario Considering the same agents A and B and the art appraisal domain of ART, another interesting scenario describes the following situation: agent A asks agent B for information about agents it knows to have skill in some specific painting era. In this case agent A wants information concerning the direct reputation agent B has about agents that have skill in a specific era, such as cubism.
Following the same steps as in the previous scenario, agent A's message is prepared in its RRM using information from its internal model. An overview of this interaction is shown in Figure 2 (Interaction about specific types of reputation values). Agent B's response to agent A is processed in its RRM and is composed of tuples (agent, value, cubism, image), where each pair (agent, value) gives an agent whose expertise about cubism agent B knows from its own opinion, together with the associated reputation value. This response is forwarded to the RMM in order to be translated into the enriched common model and sent to agent A. After receiving the information sent by agent B, agent A processes it in its RMM and translates it into its own reputation model to be analyzed by its RRM. 5. CONCLUSION In this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model. A reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa. The ART testbed has been enriched to use the ontology during agent transactions. Some scenarios were described to illustrate our proposal, and they seem to be a promising way to improve the process of building reputation using only existing technologies.
Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed ABSTRACT In open MAS it is often a problem to achieve agents' interoperability. The heterogeneity of its components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models, and the decision about trusting other agents is a crucial condition for the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interaction among heterogeneous agents when exchanging reputation values and may be used by agents that use different reputation models. 1. INTRODUCTION Open multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at will, because open systems have no centralized control over the development of their parts [1]. Since agents are considered as autonomous entities, we cannot assume that there is a way to control their internal behavior. These features are interesting for obtaining flexible and adaptive systems, but they also create new risks about the reliability and the robustness of the system. Solutions to this problem have been proposed by way of trust models, where agents are endowed with a model of other agents that allows them to decide if they can or cannot trust another agent. Such a trust decision is very important because it is an essential condition for the formation of agents' cooperation. The trust decision processes use the concept of reputation as the basis of a decision. Reputation is a subject that has been studied in several works [4] [5] [8] [9] with different approaches, but also with different semantics attached to the reputation concept. Casare and Sichman [2] [3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow the interoperability among different agent reputation models. This paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models. An outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6]. 2. THE FUNCTIONAL ONTOLOGY OF REPUTATION (FORe) 3. MAPPING THE AGENT REPUTATION MODELS TO THE FORe 4. EXPERIMENTAL SCENARIOS USING THE ART TESTBED 4.1 The ART testbed 4.2 Reputation transaction scenarios 4.2.1 General scenario 4.2.2 ART scenario 5. CONCLUSION In this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model.
A reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa. The ART testbed has been enriched to use the ontology during agent transactions. Some scenarios were described to illustrate our proposal and they seem to be a promising way to improve the process of building reputation just using existing technologies.
Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed ABSTRACT In open MAS it is often a problem to achieve agents' interoperability. The heterogeneity of its components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models, and the decision about trusting other agents is a crucial condition for the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interaction among heterogeneous agents when exchanging reputation values and may be used by agents that use different reputation models. 1. INTRODUCTION Open multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at will, because open systems have no centralized control over the development of their parts [1]. Since agents are considered as autonomous entities, we cannot assume that there is a way to control their internal behavior. Solutions to this problem have been proposed by way of trust models, where agents are endowed with a model of other agents that allows them to decide if they can or cannot trust another agent. Such a trust decision is an essential condition for the formation of agents' cooperation. The trust decision processes use the concept of reputation as the basis of a decision. Reputation is a subject that has been studied in several works [4] [5] [8] [9] with different approaches, but also with different semantics attached to the reputation concept. Casare and Sichman [2] [3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow the interoperability among different agent reputation models. This paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models. An outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6]. 5. CONCLUSION In this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model. A reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa. The ART testbed has been enriched to use the ontology during agent transactions. Some scenarios were described to illustrate our proposal and they seem to be a promising way to improve the process of building reputation just using existing technologies.
I-66
Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to larger numbers of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms.
[ "network", "pomdp", "qualiti guarante polici", "distribut partial observ markov decis problem", "distribut pomdp", "uncertain domain", "approxim solut", "global optim", "agent interact structur", "heurist", "polici search", "qualiti guarante approxim", "multi-agent system", "agent network", "branch and bound heurist search techniqu", "heurist function", "optim joint polici", "network structur", "depth first search", "distribut sensor network", "overal joint reward", "maximum constrain node", "partial observ markov decis process", "global optim solut" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "M", "M", "M", "R", "M", "M", "U", "U", "M", "R" ]
Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies Pradeep Varakantham, Janusz Marecki, Yuichi Yabu∗, Milind Tambe, Makoto Yokoo∗ University of Southern California, Los Angeles, CA 90089, {varakant, marecki, tambe}@usc.edu ∗Dept. of Intelligent Systems, Kyushu University, Fukuoka, 812-8581 Japan, yokoo@is.kyushu-u.ac.jp ABSTRACT Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to larger numbers of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multi-agent Systems General Terms Algorithms, Theory 1. INTRODUCTION Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of nondeterminism in the outcomes of actions and because the world state may only be partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches towards solving these models. The first category consists of highly efficient approximate techniques that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second less popular category of approaches has focused on a global optimal result [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focusing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm.
There are two key novel features in SPIDER: (i) it is a branch and bound heuristic search technique that uses an MDP-based heuristic function to search for an optimal joint policy; (ii) it exploits the network structure of agents by organizing agents into a Depth First Search (DFS) pseudo tree and takes advantage of the independence in the different branches of the DFS tree. We then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstractions for speedup, but does not sacrifice solution quality. In particular, it initially performs branch and bound search on abstract policies and then extends to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but within an input parameter that provides the tolerable expected value difference from the optimal solution. The third enhancement is again based on bounding the search for efficiency, however with a tolerance parameter that is provided as a percentage of optimal. We experimented with the sensor network domain presented in Nair et al. [10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements it enables principled tradeoffs in run-time versus solution quality. 2. DOMAIN: DISTRIBUTED SENSOR NETS Distributed sensor networks are a large, important class of domains that motivate our work. This paper focuses on a set of target tracking problems that arise in certain types of sensor networks [6], first introduced in [10]. Figure 1 shows a specific problem instance within this type consisting of three sensors. Here, each sensor node can scan in one of four directions: North, South, East or West (see Figure 1). To track a target and obtain the associated reward, two sensors with overlapping scanning areas must coordinate by scanning the same area simultaneously. In Figure 1, to track a target in Loc11, sensor1 needs to scan 'East' and sensor2 needs to scan 'West' simultaneously. Thus, sensors have to act in a coordinated fashion. We assume that there are two independent targets and that each target's movement is uncertain and unaffected by the sensor agents. Based on the area it is scanning, each sensor receives observations that can have false positives and false negatives. The sensors' observations and transitions are independent of each other's actions; e.g., the observations that sensor1 receives are independent of sensor2's actions. Each agent incurs a cost for scanning whether the target is present or not, but no cost if it turns off. Given the sensors' observational uncertainty, the targets' uncertain transitions and the distributed nature of the sensor nodes, these sensor nets provide a useful domain for applying distributed POMDP models. Figure 1: A 3-chain sensor configuration
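To make the coordination requirement concrete, here is a minimal sketch of the joint reward just described. Only Loc11 and the East/West pairing come from the text; the second location name, the track reward of 40 and the unit scan cost are illustrative assumptions, not values from the paper.

# Sketch of the coordination constraint in the 3-chain of Figure 1: reward
# accrues only when two sensors with overlapping areas scan the target's
# location at the same time, while every active scan incurs a cost.
SCAN_TARGETS = {            # which location each (sensor, direction) covers
    ("sensor1", "East"): "Loc11", ("sensor2", "West"): "Loc11",
    ("sensor2", "East"): "Loc21", ("sensor3", "West"): "Loc21",
}

def joint_reward(actions, target_location):
    scanning = [s for (s, d), loc in SCAN_TARGETS.items()
                if actions.get(s) == d and loc == target_location]
    track_reward = 40 if len(scanning) >= 2 else 0   # illustrative value
    scan_cost = sum(1 for a in actions.values() if a != "Off")
    return track_reward - scan_cost

# sensor1 scans East and sensor2 scans West: together they track Loc11.
print(joint_reward({"sensor1": "East", "sensor2": "West", "sensor3": "Off"},
                   "Loc11"))  # 40 - 2 = 38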
3. BACKGROUND 3.1 Model: Network Distributed POMDP The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. It is defined as the tuple ⟨S, A, P, Ω, O, R, b⟩, where S = ×_{1≤i≤n} S_i × S_u is the set of world states. S_i refers to the set of local states of agent i and S_u is the set of unaffectable states. Unaffectable state refers to that part of the world state that cannot be affected by the agents' actions, e.g. environmental factors like target locations that no agent can control. A = ×_{1≤i≤n} A_i is the set of joint actions, where A_i is the set of actions for agent i. ND-POMDP assumes transition independence, where the transition function is defined as P(s, a, s') = P_u(s_u, s_u') · ∏_{1≤i≤n} P_i(s_i, s_u, a_i, s_i'), where a = ⟨a_1, ..., a_n⟩ is the joint action performed in state s = ⟨s_1, ..., s_n, s_u⟩ and s' = ⟨s_1', ..., s_n', s_u'⟩ is the resulting state. Ω = ×_{1≤i≤n} Ω_i is the set of joint observations, where Ω_i is the set of observations for agent i. Observational independence is assumed in ND-POMDPs, i.e., the joint observation function is defined as O(s', a, ω) = ∏_{1≤i≤n} O_i(s_i', s_u', a_i, ω_i), where s' = ⟨s_1', ..., s_n', s_u'⟩ is the world state that results from the agents performing a = ⟨a_1, ..., a_n⟩ in the previous state, and ω = ⟨ω_1, ..., ω_n⟩ ∈ Ω is the observation received in state s'. This implies that each agent's observation depends only on the unaffectable state, its local action and its resulting local state. The reward function, R, is defined as R(s, a) = Σ_l R_l(s_{l1}, ..., s_{lr}, s_u, a_{l1}, ..., a_{lr}), where each l could refer to any sub-group of agents and r = |l|. Based on the reward function, an interaction hypergraph is constructed. A hyper-link, l, exists between a subset of agents for all R_l that comprise R. The interaction hypergraph is defined as G = (Ag, E), where the agents, Ag, are the vertices and E = {l | l ⊆ Ag ∧ R_l is a component of R} are the edges. The initial belief state (distribution over the initial state), b, is defined as b(s) = b_u(s_u) · ∏_{1≤i≤n} b_i(s_i), where b_u and b_i refer to the distributions over the initial unaffectable state and agent i's initial local state, respectively. The goal in ND-POMDP is to compute the joint policy π = ⟨π_1, ..., π_n⟩ that maximizes the team's expected reward over a finite horizon T starting from the belief state b. An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12], where the variable at each node represents the policy selected by an individual agent, π_i, with the domain of the variable being the set of all local policies, Π_i. The reward component R_l where |l| = 1 can be thought of as a local constraint, while the reward component R_l where |l| > 1 corresponds to a non-local constraint in the constraint graph.
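A minimal sketch of how transition and observation independence factor the joint dynamics. For brevity all agents share one local model here, states are encoded as (local_states, s_u) pairs, and every name is an illustrative stand-in rather than an implementation from the paper.

# Factored ND-POMDP dynamics: joint probabilities are products of an
# unaffectable term and one term per agent.
from math import prod

def joint_transition(P_u, P_i, s, s_next, a):
    # P(s, a, s') = P_u(s_u, s_u') * prod_i P_i(s_i, s_u, a_i, s_i')
    (local, s_u), (local_next, s_u_next) = s, s_next
    p = P_u(s_u, s_u_next)
    for s_i, s_i_next, a_i in zip(local, local_next, a):
        p *= P_i(s_i, s_u, a_i, s_i_next)
    return p

def joint_observation(O_i, s_next, a, omega):
    # O(s', a, w) = prod_i O_i(s_i', s_u', a_i, w_i)
    local_next, s_u_next = s_next
    return prod(O_i(s_i, s_u_next, a_i, w_i)
                for s_i, a_i, w_i in zip(local_next, a, omega))

def initial_belief(b_u, b_i, s):
    # b(s) = b_u(s_u) * prod_i b_i(s_i)
    local, s_u = s
    return b_u(s_u) * prod(b_i(s_i) for s_i in local)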
3.2 Algorithm: Global Optimal Algorithm (GOA) In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10]. We will use GOA in our experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents. GOA borrows from a global optimal DCOP algorithm called DPOP [12]. GOA's message passing follows that of DPOP. The first phase is the UTIL propagation, where the utility messages, in this case values of policies, are passed up from the leaves to the root. The value for a policy at an agent is defined as the sum of the best response values from its children and the joint policy reward associated with the parent policy. Thus, given a policy for a parent node, GOA requires an agent to iterate through all its policies, finding the best response policy and returning the value to the parent - while at the parent node, to find the best policy, an agent requires its children to return their best responses to each of its policies. This UTIL propagation process is repeated at each level in the tree, until the root exhausts all its policies. The second phase is the VALUE propagation, where the optimal policies are passed down from the root to the leaves. GOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (associated with nodes not connected directly in the tree). Since the interaction graph captures all the reward interactions among agents and as this algorithm iterates through all the relevant joint policy evaluations, this algorithm yields a globally optimal solution.
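A minimal sketch of GOA's UTIL phase, reduced to its recursive structure; the tree encoding and the local_value stand-in for the joint policy reward are illustrative assumptions, not GOA's actual data structures.

# UTIL propagation on a DFS tree: for each of a node's policies, sum the
# node's own reward term with every child's best response to that policy.
def util(node, parent_pi, policies, children, local_value):
    best = {}
    for pi in policies[node]:
        value = local_value(node, parent_pi, pi)
        for child in children.get(node, ()):
            # The child's best response to pi, found by recursive UTIL calls.
            value += max(util(child, pi, policies, children, local_value).values())
        best[pi] = value
    return best

# The root calls util(root, None, ...) and keeps its argmax policy; the VALUE
# phase would then walk back down the tree, fixing the best responses found here.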
4. SPIDER As mentioned in Section 3.1, an ND-POMDP can be treated as a DCOP, where the goal is to compute a joint policy that maximizes the overall joint reward. The brute-force technique for computing an optimal policy would be to examine the expected values for all possible joint policies. The key idea in SPIDER is to avoid computation of expected values for the entire space of joint policies, by utilizing upper bounds on the expected values of policies and the interaction structure of the agents. Akin to some of the algorithms for DCOP [8, 12], SPIDER has a pre-processing step that constructs a DFS tree corresponding to the given interaction structure. Note that these DFS trees are pseudo trees [12] that allow links between ancestors and children. We employ the Maximum Constrained Node (MCN) heuristic used in the DCOP algorithm ADOPT [8]; however, other heuristics (such as the MLSP heuristic from [7]) can also be employed. The MCN heuristic tries to place agents with a greater number of constraints at the top of the tree. This tree governs how the search for the optimal joint policy proceeds in SPIDER. The algorithms presented in this paper are easily extendable to hyper-trees; however, for expository purposes, we assume binary trees. SPIDER is an algorithm for centralized planning and distributed execution in distributed POMDPs. In this paper, we employ the following notation to denote policies and expected values:
Ancestors(i): agents on the path from i to the root (not including i).
Tree(i): agents in the sub-tree (not including i) for which i is the root.
π_{root+}: joint policy of all agents.
π_{i+}: joint policy of all agents in Tree(i) ∪ i.
π_{i−}: joint policy of the agents in Ancestors(i).
π_i: policy of the i-th agent.
v̂[π_i, π_{i−}]: upper bound on the expected value for π_{i+} given π_i and the policies of the ancestor agents, i.e. π_{i−}.
v̂_j[π_i, π_{i−}]: upper bound on the expected value for π_{i+} from the j-th child.
v[π_i, π_{i−}]: expected value for π_i given the policies of the ancestor agents, π_{i−}.
v[π_{i+}, π_{i−}]: expected value for π_{i+} given the policies of the ancestor agents, π_{i−}.
v_j[π_{i+}, π_{i−}]: expected value for π_{i+} from the j-th child.
Figure 2: Execution of SPIDER, an example
4.1 Outline of SPIDER SPIDER is based on the idea of branch and bound search, where the nodes in the search tree represent partial/complete joint policies. Figure 2 shows an example search tree for the SPIDER algorithm, using an example of the three agent chain. Before SPIDER begins its search we create a DFS tree (i.e. pseudo tree) from the three agent chain, with the middle agent as the root of this tree. SPIDER exploits the structure of this DFS tree while engaging in its search. Note that in our example figure, each agent is assigned a policy with T=2. Thus, each rounded rectangle (search tree node) indicates a partial/complete joint policy, a rectangle indicates an agent, and the ovals internal to an agent show its policy. The heuristic or actual expected value for a joint policy is indicated in the top right corner of the rounded rectangle. If the number is italicized and underlined, it implies that the actual expected value of the joint policy is provided. SPIDER begins with no policy assigned to any of the agents (shown in level 1 of the search tree). Level 2 of the search tree indicates that the joint policies are sorted based on upper bounds computed for the root agent's policies. Level 3 shows one SPIDER search node with a complete joint policy (a policy assigned to each of the agents). The expected value for this joint policy is used to prune out the nodes in level 2 (the ones with upper bounds < 234). When creating policies for each non-leaf agent i, SPIDER potentially performs two steps:
1. Obtaining upper bounds and sorting: In this step, agent i computes upper bounds on the expected values, v̂[π_i, π_{i−}], of the joint policies π_{i+} corresponding to each of its policies π_i and fixed ancestor policies. An MDP-based heuristic is used to compute these upper bounds on the expected values; a detailed description of this MDP heuristic is provided in Section 4.2. All policies of agent i, Π_i, are then sorted based on these upper bounds (also referred to as heuristic values henceforth) in descending order. Exploration of these policies (in step 2 below) is performed in this descending order. As indicated in level 2 of the search tree (of Figure 2), all the joint policies are sorted based on the heuristic values, indicated in the top right corner of each joint policy. The intuition behind sorting and then exploring policies in descending order of upper bounds is that the policies with higher upper bounds could yield joint policies with higher expected values.
2. Exploration and pruning: Exploration implies computing the best response joint policy π_{i+,∗} corresponding to fixed ancestor policies of agent i, π_{i−}. This is performed by iterating through all policies of agent i, i.e. Π_i, and summing two quantities for each policy: (i) the best response for all of i's children (obtained by performing steps 1 and 2 at each of the child nodes); (ii) the expected value obtained by i for fixed policies of ancestors. Thus, exploration of a policy π_i yields the actual expected value of a joint policy, π_{i+}, represented as v[π_{i+}, π_{i−}]. The policy with the highest expected value is the best response policy. Pruning refers to avoiding exploring all policies (or computing expected values) at agent i by using the current best expected value, v^max[π_{i+}, π_{i−}]. Henceforth, this v^max[π_{i+}, π_{i−}] will be referred to as the threshold. A policy π_i need not be explored if the upper bound for that policy, v̂[π_i, π_{i−}], is less than the threshold. This is because the expected value for the best joint policy attainable for that policy will be less than the threshold.
On the other hand, when considering a leaf agent, SPIDER computes the best response policy (and consequently its expected value) corresponding to fixed policies of its ancestors, π_{i−}.
This is accomplished by computing expected values for each of the policies (corresponding to fixed policies of ancestors) and selecting the highest expected value policy. In Figure 2, SPIDER assigns best response policies to leaf agents at level 3. The policy for the left leaf agent is to perform action East at each time step in the policy, while the policy for the right leaf agent is to perform Off at each time step. These best response policies from the leaf agents yield an actual expected value of 234 for the complete joint policy. Algorithm 1 provides the pseudo code for SPIDER. This algorithm outputs the best joint policy, π_{i+,∗} (with an expected value greater than threshold), for the agents in Tree(i). Lines 3-8 compute the best response policy of a leaf agent i, while lines 9-23 compute the best response joint policy for agents in Tree(i). This best response computation for a non-leaf agent i includes: (a) sorting of policies (in descending order) based on heuristic values on line 11; (b) computing best response policies at each of the children for fixed policies of agent i in lines 16-20; and (c) maintaining the best expected value and joint policy in lines 21-23.
Algorithm 1 SPIDER(i, π_{i−}, threshold)
1: π_{i+,∗} ← null
2: Π_i ← GET-ALL-POLICIES(horizon, A_i, Ω_i)
3: if IS-LEAF(i) then
4:   for all π_i ∈ Π_i do
5:     v[π_i, π_{i−}] ← JOINT-REWARD(π_i, π_{i−})
6:     if v[π_i, π_{i−}] > threshold then
7:       π_{i+,∗} ← π_i
8:       threshold ← v[π_i, π_{i−}]
9: else
10:   children ← CHILDREN(i)
11:   Π̂_i ← UPPER-BOUND-SORT(i, Π_i, π_{i−})
12:   for all π_i ∈ Π̂_i do
13:     π̃_{i+} ← π_i
14:     if v̂[π_i, π_{i−}] < threshold then
15:       Go to line 12
16:     for all j ∈ children do
17:       jThres ← threshold − v[π_i, π_{i−}] − Σ_{k∈children, k≠j} v̂_k[π_i, π_{i−}]
18:       π_{j+,∗} ← SPIDER(j, π_i ∪ π_{i−}, jThres)
19:       π̃_{i+} ← π̃_{i+} ∪ π_{j+,∗}
20:       v̂_j[π_i, π_{i−}] ← v[π_{j+,∗}, π_i ∪ π_{i−}]
21:     if v[π̃_{i+}, π_{i−}] > threshold then
22:       threshold ← v[π̃_{i+}, π_{i−}]
23:       π_{i+,∗} ← π̃_{i+}
24: return π_{i+,∗}
Algorithm 2 UPPER-BOUND-SORT(i, Π_i, π_{i−})
1: children ← CHILDREN(i)
2: Π̂_i ← null /* stores the sorted list */
3: for all π_i ∈ Π_i do
4:   v̂[π_i, π_{i−}] ← JOINT-REWARD(π_i, π_{i−})
5:   for all j ∈ children do
6:     v̂_j[π_i, π_{i−}] ← UPPER-BOUND(i, j, π_i ∪ π_{i−})
7:     v̂[π_i, π_{i−}] +← v̂_j[π_i, π_{i−}]
8:   Π̂_i ← INSERT-INTO-SORTED(π_i, Π̂_i)
9: return Π̂_i
Algorithm 2 provides the pseudo code for sorting policies based on the upper bounds on the expected values of joint policies. The expected value for an agent i consists of two parts: the value obtained from ancestors and the value obtained from its children. Line 4 computes the expected value obtained from the ancestors of the agent (using the JOINT-REWARD function), while lines 5-7 compute the heuristic value from the children. The sum of these two parts yields an upper bound on the expected value for agent i, and line 8 of the algorithm sorts the policies based on these upper bounds.
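Line 17's threshold decomposition is the subtle step: the bound handed to child j charges it with everything the other children are optimistically expected to contribute. A minimal sketch with illustrative numbers; the helper is ours, not part of the paper's code.

def child_threshold(threshold, local_value, child_upper_bounds, j):
    # jThres = threshold - v[pi_i, pi_i-] - sum over k != j of vhat_k[pi_i, pi_i-]
    others = sum(ub for k, ub in child_upper_bounds.items() if k != j)
    return threshold - local_value - others

# Illustrative numbers: best-so-far 234, agent i's own contribution 30, and
# children upper bounds 120 and 110; "left" must then beat 234 - 30 - 110 = 94.
print(child_threshold(234, 30, {"left": 120, "right": 110}, "left"))  # 94
# If "left" cannot beat 94, no completion of pi_i can beat the current best 234.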
4.2 MDP based heuristic function The heuristic function quickly provides an upper bound on the expected value obtainable from the agents in Tree(i). The sub-tree of agents is a distributed POMDP in itself, and the idea here is to construct a centralized MDP corresponding to the (sub-tree) distributed POMDP and obtain the expected value of the optimal policy for this centralized MDP. To reiterate this in terms of the agents in the DFS tree interaction structure, we assume full observability for the agents in Tree(i) and, for fixed policies of the agents in {Ancestors(i) ∪ i}, we compute the joint value v̂[π_{i+}, π_{i−}]. We use the following notation for presenting the equations for computing upper bounds/heuristic values (for agents i and k): let E_{i−} denote the set of links between agents in {Ancestors(i) ∪ i} and Tree(i), and E_{i+} the set of links between agents in Tree(i). Also, if l ∈ E_{i−}, then l1 is the agent in {Ancestors(i) ∪ i} and l2 is the agent in Tree(i) that l connects together. We first compact the standard notation:
o_k^t = O_k(s_k^{t+1}, s_u^{t+1}, π_k(ω_k^t), ω_k^{t+1})   (1)
p_k^t = P_k(s_k^t, s_u^t, π_k(ω_k^t), s_k^{t+1}) · o_k^t
p_u^t = P(s_u^t, s_u^{t+1})
s_l^t = ⟨s_{l1}^t, s_{l2}^t, s_u^t⟩;  ω_l^t = ⟨ω_{l1}^t, ω_{l2}^t⟩
r_l^t = R_l(s_l^t, π_{l1}(ω_{l1}^t), π_{l2}(ω_{l2}^t))
v_l^t = V_{π_l}^t(s_l^t, s_u^t, ω_{l1}^t, ω_{l2}^t)
Depending on the location of agent k in the agent tree we have the following cases:
if k ∈ {Ancestors(i) ∪ i}:  p̂_k^t = p_k^t   (2)
if k ∈ Tree(i):  p̂_k^t = P_k(s_k^t, s_u^t, π_k(ω_k^t), s_k^{t+1})
if l ∈ E_{i−}:  r̂_l^t = max_{a_{l2}} R_l(s_l^t, π_{l1}(ω_{l1}^t), a_{l2})
if l ∈ E_{i+}:  r̂_l^t = max_{a_{l1}, a_{l2}} R_l(s_l^t, a_{l1}, a_{l2})
The value function for an agent i executing the joint policy π_{i+} at time η − 1 is provided by the equation:
V_{π_{i+}}^{η−1}(s^{η−1}, ω^{η−1}) = Σ_{l∈E_{i−}} v_l^{η−1} + Σ_{l∈E_{i+}} v_l^{η−1}   (3)
where v_l^{η−1} = r_l^{η−1} + Σ_{ω_l^η, s^η} p_{l1}^{η−1} p_{l2}^{η−1} p_u^{η−1} v_l^η.
Algorithm 3 UPPER-BOUND(i, j, π_{j−})
1: val ← 0
2: for all l ∈ E_{j−} ∪ E_{j+} do
3:   if l ∈ E_{j−} then π_{l1} ← φ
4:   for all s_l^0 do
5:     val +← startBel[s_l^0] · UPPER-BOUND-TIME(i, s_l^0, j, π_{l1}, ⟨⟩)
6: return val
Algorithm 4 UPPER-BOUND-TIME(i, s_l^t, j, π_{l1}, ω_{l1}^t)
1: maxVal ← −∞
2: for all a_{l1}, a_{l2} do
3:   if l ∈ E_{i−} and l ∈ E_{j−} then a_{l1} ← π_{l1}(ω_{l1}^t)
4:   val ← GET-REWARD(s_l^t, a_{l1}, a_{l2})
5:   if t < π_i.horizon − 1 then
6:     for all s_l^{t+1}, ω_{l1}^{t+1} do
7:       futVal ← p_u^t · p̂_{l1}^t · p̂_{l2}^t
8:       futVal ∗← UPPER-BOUND-TIME(s_l^{t+1}, j, π_{l1}, ω_{l1}^t ∪ ω_{l1}^{t+1})
9:       val +← futVal
10:   if val > maxVal then maxVal ← val
11: return maxVal
The upper bound on the expected value for a link is computed by modifying equation 3 to reflect the full observability assumption. This involves removing the observational probability term for agents in Tree(i) and maximizing the future value v̂_l^η over the actions of those agents (in Tree(i)). Thus, the equation for the computation of the upper bound on a link l is as follows:
if l ∈ E_{i−}:  v̂_l^{η−1} = r̂_l^{η−1} + max_{a_{l2}} Σ_{ω_{l1}^η, s_l^η} p̂_{l1}^{η−1} p̂_{l2}^{η−1} p_u^{η−1} v̂_l^η
if l ∈ E_{i+}:  v̂_l^{η−1} = r̂_l^{η−1} + max_{a_{l1}, a_{l2}} Σ_{s_l^η} p̂_{l1}^{η−1} p̂_{l2}^{η−1} p_u^{η−1} v̂_l^η
Algorithms 3 and 4 compute the upper bound for a child j of agent i, using the equations described above. While Algorithm 4 computes the upper bound on a link given the starting state, Algorithm 3 sums the upper bound values computed over each of the links in E_{j−} ∪ E_{j+}.
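The relaxation behind the heuristic can be seen in a single backup step: for the assumed-fully-observable agents, the expectation over observations under a fixed policy is replaced by a maximization over actions. Below is a minimal single-agent sketch with illustrative table-based models; it is not the link-structured computation of Algorithms 3 and 4.

def policy_backup(policy, trans, reward, future, state, obs_history):
    # Exact backup for a fixed policy (the v term): the action is dictated
    # by the observation history, and we take an expectation over next states.
    a = policy[obs_history]
    return reward[state][a] + sum(p * future[s2]
                                  for s2, p in trans[state][a].items())

def mdp_upper_bound(trans, reward, future, state):
    # Full-observability relaxation (the vhat term): pick the best action per
    # state. For the same `future` table this dominates policy_backup for
    # every fixed policy, which is the core of Proposition 1.
    return max(reward[state][a] + sum(p * future[s2]
                                      for s2, p in trans[state][a].items())
               for a in trans[state])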
4.3 Abstraction
Algorithm 5 SPIDER-ABS(i, π_{i−}, threshold)
1: π_{i+,∗} ← null
2: Π_i ← GET-POLICIES(⟨⟩, 1)
3: if IS-LEAF(i) then
4:   for all π_i ∈ Π_i do
5:     absHeuristic ← GET-ABS-HEURISTIC(π_i, π_{i−})
6:     absHeuristic ∗← (timeHorizon − π_i.horizon)
7:     if π_i.horizon = timeHorizon and π_i.absNodes = 0 then
8:       v[π_i, π_{i−}] ← JOINT-REWARD(π_i, π_{i−})
9:       if v[π_i, π_{i−}] > threshold then
10:        π_{i+,∗} ← π_i; threshold ← v[π_i, π_{i−}]
11:    else if v[π_i, π_{i−}] + absHeuristic > threshold then
12:      Π̂_i ← EXTEND-POLICY(π_i, π_i.absNodes + 1)
13:      Π_i +← INSERT-SORTED-POLICIES(Π̂_i)
14:    REMOVE(π_i)
15: else
16:   children ← CHILDREN(i)
17:   Π_i ← UPPER-BOUND-SORT(i, Π_i, π_{i−})
18:   for all π_i ∈ Π_i do
19:     π̃_{i+} ← π_i
20:     absHeuristic ← GET-ABS-HEURISTIC(π_i, π_{i−})
21:     absHeuristic ∗← (timeHorizon − π_i.horizon)
22:     if π_i.horizon = timeHorizon and π_i.absNodes = 0 then
23:       if v̂[π_i, π_{i−}] < threshold and π_i.absNodes = 0 then
24:         Go to line 18
25:       for all j ∈ children do
26:         jThres ← threshold − v[π_i, π_{i−}] − Σ_{k∈children, k≠j} v̂_k[π_i, π_{i−}]
27:         π_{j+,∗} ← SPIDER(j, π_i ∪ π_{i−}, jThres)
28:         π̃_{i+} ← π̃_{i+} ∪ π_{j+,∗}; v̂_j[π_i, π_{i−}] ← v[π_{j+,∗}, π_i ∪ π_{i−}]
29:       if v[π̃_{i+}, π_{i−}] > threshold then
30:         threshold ← v[π̃_{i+}, π_{i−}]; π_{i+,∗} ← π̃_{i+}
31:     else if v̂[π_{i+}, π_{i−}] + absHeuristic > threshold then
32:       Π̂_i ← EXTEND-POLICY(π_i, π_i.absNodes + 1)
33:       Π_i +← INSERT-SORTED-POLICIES(Π̂_i)
34:     REMOVE(π_i)
35: return π_{i+,∗}
In SPIDER, the exploration/pruning phase can only begin after the heuristic (or upper bound) computation and sorting for the policies has ended. We provide an approach to possibly circumvent the exploration of a group of policies based on the heuristic computation for one abstract policy, thus leading to an improvement in run-time performance (without loss in solution quality). The important steps in this technique are defining the abstract policy and how heuristic values are computed for the abstract policies. In this paper, we propose two types of abstraction:
1. Horizon Based Abstraction (HBA): Here, the abstract policy is defined as a shorter horizon policy. It represents a group of longer horizon policies that have the same actions as the abstract policy for times less than or equal to the horizon of the abstract policy. In Figure 3(a), a T=1 abstract policy that performs the East action represents a group of T=2 policies that perform East in the first time step. For HBA, there are two parts to the heuristic computation: (a) computing the upper bound for the horizon of the abstract policy - this is the same as the heuristic computation defined by the GET-HEURISTIC() algorithm for SPIDER, however with a shorter time horizon (the horizon of the abstract policy); and (b) computing the maximum possible reward that can be accumulated in one time step (using GET-ABS-HEURISTIC()) and multiplying it by the number of time steps to the time horizon. This maximum possible reward (for one time step) is obtained by iterating through all the actions of all the agents in Tree(i) and computing the maximum joint reward for any joint action. The sum of (a) and (b) is the heuristic value for a HBA abstract policy.
2. Node Based Abstraction (NBA): Here an abstract policy is obtained by not associating actions with certain nodes of the policy tree. Unlike in HBA, this implies multiple levels of abstraction. This is illustrated in Figure 3(b), where there are T=2 policies that do not have an action for observation 'TP'. These incomplete T=2 policies are abstractions for T=2 complete policies.
[Figure 3: Example of abstraction for (a) HBA (Horizon Based Abstraction) and (b) NBA (Node Based Abstraction)]
Increased levels of abstraction lead to faster computation of a complete joint policy, π_{root+}, and also to shorter heuristic computation, exploration and pruning phases. For NBA, the heuristic computation is similar to that of a normal policy, except in cases where there is no action associated with policy nodes. In such cases, the immediate reward is taken as Rmax (the maximum reward for any action). We combine both the abstraction techniques mentioned above into one technique, SPIDER-ABS. Algorithm 5 provides the algorithm for this abstraction technique. For computing the optimal joint policy with SPIDER-ABS, a non-leaf agent i initially examines all abstract T=1 policies (line 2) and sorts them based on abstract policy heuristic computations (line 17). The abstraction horizon is gradually increased, and these abstract policies are then explored in descending order of heuristic values, with ones that have heuristic values less than the threshold pruned (lines 23-24). Exploration in SPIDER-ABS has the same definition as in SPIDER if the policy being explored has a horizon of policy computation equal to the actual time horizon and if all the nodes of the policy have an action associated with them (lines 25-30). However, if those conditions are not met, then the policy is substituted by the group of policies that it represents (using the EXTEND-POLICY() function) (lines 31-32). The EXTEND-POLICY() function is also responsible for initializing the horizon and absNodes of a policy; absNodes represents the number of nodes at the last level in the policy tree that do not have an action assigned to them. If π_i.absNodes = |Ω_i|^{π_i.horizon−1} (i.e. the total number of policy nodes possible at π_i.horizon), then π_i.absNodes is set to zero and π_i.horizon is increased by 1. Otherwise, π_i.absNodes is increased by 1. Thus, this function combines both HBA and NBA by using the policy variables horizon and absNodes. Before substituting the abstract policy with a group of policies, those policies are sorted based on heuristic values (line 33). A similar type of abstraction-based best response computation is adopted at leaf agents (lines 3-14).
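A small sketch of the horizon/absNodes bookkeeping just described, for an agent with |Ω_i| = 2 observations; the refinement rule is transcribed from the text, and the helper itself is hypothetical.

def extend(horizon, abs_nodes, num_obs):
    # Mirrors the rule for EXTEND-POLICY: once absNodes reaches the number of
    # last-level nodes (num_obs ** (horizon - 1)), reset it and grow the horizon.
    if abs_nodes == num_obs ** (horizon - 1):
        return horizon + 1, 0
    return horizon, abs_nodes + 1

level = (1, 0)                  # start from the abstract T=1 policies
for _ in range(4):
    level = extend(*level, num_obs=2)
    print(level)                # (1, 1), (2, 0), (2, 1), (2, 2): NBA steps
                                # within a horizon, HBA when the counter rolls over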
4.4 Value ApproXimation (VAX) In this section, we present an approximate enhancement to SPIDER called VAX. The input to this technique is an approximation parameter ε, which determines the difference from the optimal solution quality. This approximation parameter is used at each agent for pruning out joint policies. The pruning mechanism in SPIDER and SPIDER-ABS dictates that a joint policy be pruned only if the threshold is exactly greater than the heuristic value. However, the idea in this technique is to prune out a joint policy if the following condition is satisfied: threshold + ε > v̂[π_i, π_{i−}]. Apart from the pruning condition, VAX is the same as SPIDER/SPIDER-ABS. In the example of Figure 2, if the heuristic value for the second joint policy (or second search tree node) in level 2 were 238 instead of 232, then that policy could not be pruned using SPIDER or SPIDER-ABS. However, in VAX with an approximation parameter of 5, the joint policy in consideration would also be pruned. This is because the threshold (234) at that juncture plus the approximation parameter (5), i.e. 239, would have been greater than the heuristic value for that joint policy (238). It can be noted from this example that such pruning can lead to fewer explorations and hence to an improvement in the overall run-time performance. However, this can entail a sacrifice in the quality of the solution, because this technique can prune out a candidate optimal solution. A bound on the error introduced by this approximate algorithm, as a function of ε, is provided by Proposition 3.
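The three pruning tests side by side, on this section's running example. PAX's multiplicative test is the one introduced in the next subsection, shown here only for contrast; the function names are ours.

def prune_spider(threshold, vhat):
    return vhat < threshold

def prune_vax(threshold, vhat, eps):
    return vhat < threshold + eps

def prune_pax(threshold, vhat, delta):
    return (delta / 100.0) * vhat < threshold

print(prune_spider(234, 238))           # False: must still be explored
print(prune_vax(234, 238, eps=5))       # True: 238 < 234 + 5
print(prune_pax(234, 238, delta=98))    # True: 0.98 * 238 = 233.24 < 234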
4.4 Value ApproXimation (VAX)

In this section, we present an approximate enhancement to SPIDER called VAX. The input to this technique is an approximation parameter ε, which determines the permitted difference from the optimal solution quality and is used at each agent for pruning joint policies. The pruning mechanism in SPIDER and SPIDER-ABS dictates that a joint policy be pruned only if the threshold is strictly greater than the heuristic value; the idea in VAX is instead to prune a joint policy if the following condition is satisfied: threshold + ε > ˆv[πi, πi−]. Apart from this pruning condition, VAX is the same as SPIDER/SPIDER-ABS. In the example of Figure 2, if the heuristic value for the second joint policy (the second search tree node) in level 2 were 238 instead of 232, then that policy could not be pruned by SPIDER or SPIDER-ABS. In VAX with an approximation parameter of 5, however, the joint policy in question would be pruned, because the threshold at that juncture (234) plus the approximation parameter (5), i.e. 239, is greater than the heuristic value for that joint policy (238). As this example illustrates, such pruning can lead to fewer explorations and hence to an improvement in overall run-time performance. However, it can entail a sacrifice in solution quality, because it can prune out a candidate optimal solution. A bound on the error introduced by this approximate algorithm, as a function of ε, is provided by Proposition 3.

4.5 Percentage ApproXimation (PAX)

In this section, we present a second approximation enhancement over SPIDER called PAX. The input to this technique is a parameter δ that represents the minimum percentage of the optimal solution quality that is desired; the output is a policy with an expected value that is at least δ% of the optimal solution quality. A policy is pruned if the following condition is satisfied: threshold > (δ/100) · ˆv[πi, πi−]. As in VAX, this pruning condition is the only difference between PAX and SPIDER/SPIDER-ABS. Again in Figure 2, if the heuristic value for the second search tree node in level 2 were 238 instead of 232, then PAX with an input parameter of 98% would be able to prune that search tree node (since (98/100) · 238 = 233.24 < 234). This type of pruning leads to fewer explorations and hence an improvement in run-time performance, while potentially incurring a loss in solution quality; Proposition 4 provides the bound on the quality loss. The three pruning tests are contrasted in the sketch below.
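Since the only difference among SPIDER/SPIDER-ABS, VAX and PAX is the pruning test, the three tests can be set side by side. A minimal Python sketch, where upper_bound plays the role of ˆv[πi, πi−] and a True result means the policy is pruned:

def prune_spider(upper_bound, threshold):
    # Exact test: safe because the upper bound is admissible (Prop. 1).
    return upper_bound < threshold

def prune_vax(upper_bound, threshold, epsilon):
    # Also prunes near-threshold policies; error bounded by Prop. 3.
    return upper_bound < threshold + epsilon

def prune_pax(upper_bound, threshold, delta):
    # Keeps at least delta% of the optimal quality (Prop. 4).
    return (delta / 100.0) * upper_bound < threshold

On the Figure 2 numbers just discussed (threshold 234, hypothetical upper bound 238): prune_spider returns False, prune_vax with ε = 5 returns True (238 < 239), and prune_pax with δ = 98 returns True (233.24 < 234).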
4.6 Theoretical Results

Proposition 1. The heuristic provided by the centralized MDP heuristic is admissible.

Proof. For the value provided by the heuristic to be admissible, it should be an overestimate of the expected value of a joint policy. Thus, we need to show that, for l ∈ Ei+ ∪ Ei−: ˆv_l^t ≥ v_l^t (refer to the notation in Section 4.2). We use mathematical induction on t.

Base case: t = T − 1. Irrespective of whether l ∈ Ei− or l ∈ Ei+, ˆr_l^t is computed by maximizing over all actions of the agents in Tree(i), while r_l^t is computed for fixed policies of the same agents. Hence ˆr_l^t ≥ r_l^t and also ˆv_l^t ≥ v_l^t.

Assumption: The proposition holds for t = η, where 1 ≤ η < T − 1. We now prove that it holds for t = η − 1. We show the proof for l ∈ Ei−; similar reasoning can be adopted for l ∈ Ei+. The heuristic value function for l ∈ Ei− is given by

$$\hat{v}^{\eta-1}_l = \hat{r}^{\eta-1}_l + \max_{a_{l_2}} \sum_{\omega^{\eta}_{l_1},\, s^{\eta}_l} \hat{p}^{\eta-1}_{l_1}\, \hat{p}^{\eta-1}_{l_2}\, p^{\eta-1}_u\, \hat{v}^{\eta}_l$$

Rewriting the RHS and using Eqn. 2 (in Section 4.2),

$$= \hat{r}^{\eta-1}_l + \max_{a_{l_2}} \sum_{\omega^{\eta}_{l_1},\, s^{\eta}_l} p^{\eta-1}_u\, p^{\eta-1}_{l_1}\, \hat{p}^{\eta-1}_{l_2}\, \hat{v}^{\eta}_l = \hat{r}^{\eta-1}_l + \sum_{\omega^{\eta}_{l_1},\, s^{\eta}_l} p^{\eta-1}_u\, p^{\eta-1}_{l_1}\, \max_{a_{l_2}} \hat{p}^{\eta-1}_{l_2}\, \hat{v}^{\eta}_l$$

Since $\max_{a_{l_2}} \hat{p}^{\eta-1}_{l_2} \hat{v}^{\eta}_l \ge \sum_{\omega_{l_2}} o^{\eta-1}_{l_2}\, \hat{p}^{\eta-1}_{l_2}\, \hat{v}^{\eta}_l$ and $p^{\eta-1}_{l_2} = o^{\eta-1}_{l_2}\, \hat{p}^{\eta-1}_{l_2}$,

$$\ge \hat{r}^{\eta-1}_l + \sum_{\omega^{\eta}_{l_1},\, s^{\eta}_l} p^{\eta-1}_u\, p^{\eta-1}_{l_1} \sum_{\omega_{l_2}} p^{\eta-1}_{l_2}\, \hat{v}^{\eta}_l$$

Since $\hat{v}^{\eta}_l \ge v^{\eta}_l$ (from the assumption),

$$\ge \hat{r}^{\eta-1}_l + \sum_{\omega^{\eta}_{l_1},\, s^{\eta}_l} p^{\eta-1}_u\, p^{\eta-1}_{l_1} \sum_{\omega_{l_2}} p^{\eta-1}_{l_2}\, v^{\eta}_l$$

and since $\hat{r}^{\eta-1}_l \ge r^{\eta-1}_l$ (by definition),

$$\ge r^{\eta-1}_l + \sum_{(\omega^{\eta}_l,\, s^{\eta}_l)} p^{\eta-1}_u\, p^{\eta-1}_{l_1}\, p^{\eta-1}_{l_2}\, v^{\eta}_l = v^{\eta-1}_l.$$

Thus proved.

Proposition 2. SPIDER provides an optimal solution.

Proof. SPIDER examines all possible joint policies given the interaction structure of the agents, the only exception being when a joint policy is pruned based on its heuristic value. Thus, as long as a candidate optimal policy is not pruned, SPIDER will return an optimal policy. As proved in Proposition 1, the heuristic value for a joint policy is always an upper bound on its expected value; hence, a pruned joint policy cannot be an optimal solution.

Proposition 3. The error bound on the solution quality for VAX (implemented over SPIDER-ABS) with an approximation parameter of ε is ρε, where ρ is the number of leaf nodes in the DFS tree.

Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.

Base case: depth = 1 (i.e., one node). The best response is computed by iterating through all policies Πk. A policy πk is pruned if ˆv[πk, πk−] < threshold + ε. Thus the best response policy computed by VAX is at most ε away from the optimal best response, so the proposition holds for the base case.

Assumption: The proposition holds for every depth between 1 and d. We now prove that it holds for d + 1. Without loss of generality, assume that the root node of the tree has k children. Each of these children is of depth ≤ d, and hence, from the assumption, the error introduced in the kth child is ρk ε, where ρk is the number of leaf nodes in the kth child of the root; therefore ρ = Σk ρk, where ρ is the number of leaf nodes in the tree. In SPIDER-ABS, the threshold at the root agent is thresh_spider = Σk v[πk+, πk−]. With VAX, the threshold at the root agent will be, in the worst case, thresh_vax = Σk v[πk+, πk−] − Σk ρk ε. Hence, with VAX a joint policy is pruned at the root agent if

$$\hat{v}[\pi^{root}, \pi^{root-}] < \mathit{thresh}_{vax} + \epsilon \;\Rightarrow\; \hat{v}[\pi^{root}, \pi^{root-}] < \mathit{thresh}_{spider} - \Big(\big(\textstyle\sum_k \rho_k\big) - 1\Big)\epsilon \le \mathit{thresh}_{spider},$$

so the root prunes only joint policies whose upper bounds already lie below the SPIDER-ABS threshold, and the total error remains bounded by (Σk ρk)ε = ρε. Hence proved.

Proposition 4. For PAX (implemented over SPIDER-ABS) with an input parameter of δ, the solution quality is at least (δ/100) v[πroot+,∗], where v[πroot+,∗] denotes the optimal solution quality.

Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.

Base case: depth = 1 (i.e., one node). The best response is computed by iterating through all policies Πk. A policy πk is pruned if (δ/100) ˆv[πk, πk−] < threshold. Thus the best response policy computed by PAX is at least δ/100 times the optimal best response, so the proposition holds for the base case.

Assumption: The proposition holds for every depth between 1 and d. We now prove that it holds for d + 1. Without loss of generality, assume that the root node of the tree has k children. Each of these children is of depth ≤ d, and hence, from the assumption, the solution quality in the kth child is at least (δ/100) v[πk+,∗, πk−] for PAX. With SPIDER-ABS, a joint policy is pruned at the root agent if ˆv[πroot, πroot−] < Σk v[πk+,∗, πk−]. With PAX, a joint policy is pruned if

$$\tfrac{\delta}{100}\, \hat{v}[\pi^{root}, \pi^{root-}] < \textstyle\sum_k \tfrac{\delta}{100}\, v[\pi^{k+,*}, \pi^{k-}] \;\Rightarrow\; \hat{v}[\pi^{root}, \pi^{root-}] < \textstyle\sum_k v[\pi^{k+,*}, \pi^{k-}].$$

Since the pruning condition at the root agent in PAX is the same as in SPIDER-ABS, no error is introduced at the root agent and all the error is introduced in the children. Thus the overall solution quality is at least δ/100 of the optimal solution. Hence proved.
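The two guarantees just proved can be restated compactly in LaTeX; the subscripted policy names are our notation for the joint policies the two variants return, not the paper's:

\begin{align*}
\text{VAX (Proposition 3):}\quad & v\big[\pi^{root+}_{\text{VAX}}\big] \;\ge\; v\big[\pi^{root+,*}\big] - \rho\,\epsilon \\
\text{PAX (Proposition 4):}\quad & v\big[\pi^{root+}_{\text{PAX}}\big] \;\ge\; \tfrac{\delta}{100}\, v\big[\pi^{root+,*}\big]
\end{align*}

where ρ is the number of leaf agents in the DFS tree, ε is the VAX approximation parameter, and δ is the PAX percentage parameter.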
5. EXPERIMENTAL RESULTS

All our experiments were conducted on the sensor network domain from Section 2; the five network configurations employed are shown in Figure 4. The algorithms we experimented with are GOA, SPIDER, SPIDER-ABS, PAX and VAX. We compare against GOA because it is the only global optimal algorithm that considers more than two agents. We performed two sets of experiments: (i) first, we compared the run-time performance of the above algorithms, and (ii) second, we experimented with PAX and VAX to study the tradeoff between run-time and solution quality. All experiments were run on an Intel Xeon 3.6 GHz processor with 2 GB RAM and were terminated after 10000 seconds.

Figure 5(a) provides run-time comparisons between the optimal algorithms GOA, SPIDER and SPIDER-ABS and the approximate algorithms PAX (δ of 30) and VAX (ε of 80). The X-axis denotes the sensor network configuration used, while the Y-axis indicates the runtime (on a log scale). The time horizon of policy computation was 3. For each configuration (3-chain, 4-chain, 4-star and 5-star), there are five bars indicating the time taken by GOA, SPIDER, SPIDER-ABS, PAX and VAX. GOA did not terminate within the time limit for the 4-star and 5-star configurations. SPIDER-ABS dominated SPIDER and GOA for all the configurations: in the 3-chain configuration, SPIDER-ABS provides a 230-fold speedup over GOA and a 2-fold speedup over SPIDER, and in the 4-chain configuration it provides a 58-fold speedup over GOA and a 2-fold speedup over SPIDER. The two approximation approaches, VAX and PAX, provided further improvement in performance over SPIDER-ABS; for instance, in the 5-star configuration VAX provides a 15-fold speedup and PAX an 8-fold speedup over SPIDER-ABS.

Figure 5(b) provides a comparison of the solution quality obtained using the different algorithms for the problems tested in Figure 5(a). The X-axis denotes the sensor network configuration, while the Y-axis indicates the solution quality. Since GOA, SPIDER, and SPIDER-ABS are all global optimal algorithms, the solution quality is the same for all of them. For the 5-P configuration, the global optimal algorithms did not terminate within the limit of 10000 seconds, so the bar for optimal quality indicates an upper bound on the optimal solution quality. With both approximations, we obtained a solution quality close to the optimal. In the 3-chain and 4-star configurations, it is remarkable that both PAX and VAX obtained almost the same actual quality as the global optimal algorithms, despite the approximation parameters ε and δ. For the other configurations as well, the loss in quality was less than 20% of the optimal solution quality.

Figure 5(c) provides the time to solution with PAX for varying δ. The X-axis denotes the approximation parameter δ (percentage to optimal) used, while the Y-axis denotes the time taken to compute the solution (on a log scale). The time horizon for all the configurations was 4. As δ was decreased from 70 to 30, the time to solution decreased drastically; for instance, in the 3-chain case there was a total speedup of 170-fold when δ was changed from 70 to 30. Interestingly, even with a low δ of 30%, the actual solution quality remained equal to the one obtained at 70%.

Figure 5(d) provides the time to solution for all the configurations with VAX for varying ε. The X-axis denotes the approximation parameter ε used, while the Y-axis denotes the time taken to compute the solution (on a log scale). The time horizon for all the configurations was 4. As ε was increased, the time to solution decreased drastically; for instance, in the 4-star case there was a total speedup of 73-fold when ε was changed from 60 to 140. Again, the actual solution quality did not change with varying ε.

Figure 4: Sensor network configurations
Figure 5: Comparison of GOA, SPIDER, SPIDER-ABS and VAX for T = 3 on (a) runtime and (b) solution quality; (c) time to solution for PAX with varying percentage to optimal for T = 4; (d) time to solution for VAX with varying epsilon for T = 4

6. SUMMARY AND RELATED WORK

This paper presents four algorithms, SPIDER, SPIDER-ABS, PAX and VAX, that provide a novel combination of features for policy search in distributed POMDPs: (i) exploiting agent interaction structure given a network of agents (i.e., easier scale-up to larger numbers of agents); (ii) using branch and bound search with an MDP-based heuristic function; (iii) utilizing abstraction to improve run-time performance without sacrificing solution quality; (iv) providing a priori percentage bounds on the quality of solutions using PAX; and (v) providing expected-value bounds on the quality of solutions using VAX. These features allow for a systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty. Experimental results show orders-of-magnitude improvements in performance over previous global optimal algorithms.

Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques computes global optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominated policies that provides optimal solutions for distributed POMDPs. Szer et al. [13] provide an optimal heuristic search method, MAA*, for solving decentralized POMDPs, based on the combination of a classical heuristic search algorithm, A*, and decentralized control theory. The key differences between SPIDER and MAA* are: (a) the enhancements to SPIDER (VAX and PAX) provide quality-guaranteed approximations, while MAA* is a global optimal algorithm and hence involves significant computational complexity; (b) owing to MAA*'s inability to exploit interaction structure, it was illustrated only with two agents, whereas SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents). The second set of techniques seeks approximate policies. Emery-Montemerlo et al. [4] approximate POSGs as a series of one-step Bayesian games, using heuristics to approximate future value and trading off limited lookahead for computational efficiency, resulting in locally optimal policies (with respect to the selected heuristic). Nair et al.'s JESP algorithm [9] uses dynamic programming to reach a locally optimal solution for finite-horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al. [2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all of the above techniques.

Acknowledgements. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
7. REFERENCES

[1] R. Becker, S. Zilberstein, V. Lesser, and C. V. Goldman. Solving transition independent decentralized Markov decision processes. JAIR, 22:423-455, 2004.
[2] D. S. Bernstein, E. A. Hansen, and S. Zilberstein. Bounded policy iteration for decentralized POMDPs. In IJCAI, 2005.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of MDPs. In UAI, 2000.
[4] R. Emery-Montemerlo, G. Gordon, J. Schneider, and S. Thrun. Approximate solutions for partially observable stochastic games with common payoffs. In AAMAS, 2004.
[5] E. Hansen, D. Bernstein, and S. Zilberstein. Dynamic programming for partially observable stochastic games. In AAAI, 2004.
[6] V. Lesser, C. Ortiz, and M. Tambe. Distributed sensor nets: A multiagent perspective. Kluwer, 2003.
[7] R. Maheswaran, M. Tambe, E. Bowring, J. Pearce, and P. Varakantham. Taking DCOP to the real world: Efficient complete solutions for distributed event scheduling. In AAMAS, 2004.
[8] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An asynchronous complete method for distributed constraint optimization. In AAMAS, 2003.
[9] R. Nair, D. Pynadath, M. Yokoo, M. Tambe, and S. Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In IJCAI, 2003.
[10] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs. In AAAI, 2005.
[11] L. Peshkin, N. Meuleau, K.-E. Kim, and L. Kaelbling. Learning to cooperate via policy search. In UAI, 2000.
[12] A. Petcu and B. Faltings. A scalable method for multiagent constraint optimization. In IJCAI, 2005.
[13] D. Szer, F. Charpillet, and S. Zilberstein. MAA*: A heuristic search algorithm for solving decentralized POMDPs. In IJCAI, 2005.
Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies ABSTRACT Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to larger number of agents); (ii) it uses a combination of heuristics to speedup policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms. 1. INTRODUCTION Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of non determinism in the outcomes of actions and because the world state may only be partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches towards solving these models. The first category consists of highly efficient approximate techniques, that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second less popular category of approaches has focused on a global optimal result [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focussing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm. There are two key novel features in SPIDER: (i) it is a branch and bound heuristic search technique that uses a MDP-based heuristic function to search for an optimal joint policy; (ii) it exploits network structure of agents by organizing agents into a Depth First Search (DFS) pseudo tree and takes advantage of the independence in the different branches of the DFS tree. We then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstractions for speedup, but does not sacrifice solution quality. 
In particular, it initially performs branch and bound search on abstract policies and then extends to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but within an input parameter that provides the tolerable expected value difference from the optimal solution. The third enhancement is again based on bounding the search for efficiency, however with a tolerance parameter that is provided as a percentage of optimal. We experimented with the sensor network domain presented in Nair et al. [10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements it enables principled tradeoffs in run-time versus solution quality. 2. DOMAIN: DISTRIBUTED SENSOR NETS Distributed sensor networks are a large, important class of domains that motivate our work. This paper focuses on a set of target tracking problems that arise in certain types of sensor networks [6] first introduced in [10]. Figure 1 shows a specific problem instance within this type consisting of three sensors. Here, each sensor node can scan in one of four directions: North, South, East or West (see Figure 1). To track a target and obtain associated reward, two sensors with overlapping scanning areas must coordinate by scanning the same area simultaneously. In Figure 1, to track a target in Loc11, sensor1 needs to scan ` East' and sensor2 needs to scan ` West' simultaneously. Thus, sensors have to act in a coordinated fashion. We assume that there are two independent targets and that each target's movement is uncertain and unaffected by the sensor agents. Based on the area it is scanning, each sensor receives observations that can have false positives and false negatives. The sensors' observations and transitions are independent of each other's actions e.g.the observations that sensor1 receives are independent of sensor2's actions. Each agent incurs a cost for scanning whether the target is present or not, but no cost if it turns off. Given the sensors' observational uncertainty, the targets' uncertain transitions and the distributed nature of the sensor nodes, these sensor nets provide a useful domains for applying distributed POMDP models. Figure 1: A 3-chain sensor configuration 3. BACKGROUND 3.1 Model: Network Distributed POMDP The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. It is defined as the tuple (S, A, P, Ω, O, R, b), where S = x1 <i <nSi x Su is the set of world states. Si refers to the set of local states of agent i and Su is the set of unaffectable states. Unaffectable state refers to that part of the world state that cannot be affected by the agents' actions, e.g. environmental factors like target locations that no agent can control. A = x1 <i <nAi is the set of joint actions, where Ai is the set of action for agent i. 
ND-POMDP assumes transition independence, where the transition function is defined as P (s, a, s;-RRB- = Pu (su, s; u) • rI1 <i <n Pi (si, su, ai, s; i), where a = (a1,..., an) is the joint action performed in state s = (s1,..., sn, su) and s; = (s; 1,..., s; n, s; u) is the resulting state. Ω = x1 <i <nΩi is the set of joint observations where Ωi is the set of observations for agents i. Observational independence is assumed in ND-POMDPs i.e., the joint observation function is defined as O (s, a, ω) = rI1 <i <n Oi (si, su, ai, ωi), where s = (s1,..., sn, su) is the world state that results from the agents performing a = (a1,..., an) in the previous state, and ω =-LRB- ω1,..., ωn) E Ω is the observation received in state s. This implies that each agent's observation depends only on the unaffectable state, its local action and on its resulting local state. The reward function, R, is defined as l Rl (sl1,..., slr, su, (al1,..., alr)), where each l could refer to any sub-group of agents and r = IlI. Based on the reward function, an interaction hypergraph is constructed. A hyper-link, l, exists between a subset of agents for all Rl that comprise R. The interaction hypergraph is defined as G = (Ag, E), where the agents, Ag, are the vertices and E = 1lIl C Ag n Rl is a component of R} are the edges. defined as b (s) = bu (su) • H The initial belief state (distribution over the initial state), b, is 1 <i <n bi (si), where bu and bi refer to the distribution over initial unaffectable state and agent i's initial belief state, respectively. The goal in ND-POMDP is to compute the joint policy π = (π1,..., πn) that maximizes team's expected reward over a finite horizon T starting from the belief state b. An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12] where the variable at each node represents the policy selected by an individual agent, πi with the domain of the variable being the set of all local policies, Πi. The reward component Rl where IlI = 1 can be thought of as a local constraint while the reward component Rl where l> 1 corresponds to a non-local constraint in the constraint graph. 3.2 Algorithm: Global Optimal Algorithm (GOA) In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10]. We will use GOA in our experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents. GOA borrows from a global optimal DCOP algorithm called DPOP [12]. GOA's message passing follows that of DPOP. The first phase is the UTIL propagation, where the utility messages, in this case values of policies, are passed up from the leaves to the root. Value for a policy at an agent is defined as the sum of best response values from its children and the joint policy reward associated with the parent policy. Thus, given a policy for a parent node, GOA requires an agent to iterate through all its policies, finding the best response policy and returning the value to the parent--while at the parent node, to find the best policy, an agent requires its children to return their best responses to each of its policies. This UTIL propagation process is repeated at each level in the tree, until the root exhausts all its policies. In the second phase of VALUE propagation, where the optimal policies are passed down from the root till the leaves. 
GOA takes advantage of the local interactions in the interaction graph, by pruning out unnecessary joint policy evaluations (associated with nodes not connected directly in the tree). Since the interaction graph captures all the reward interactions among agents and as this algorithm iterates through all the relevant joint policy evaluations, this algorithm yields a globally optimal solution. 4. SPIDER As mentioned in Section 3.1, an ND-POMDP can be treated as a DCOP, where the goal is to compute a joint policy that maximizes the overall joint reward. The brute-force technique for computing an optimal policy would be to examine the expected values for all possible joint policies. The key idea in SPIDER is to avoid computation of expected values for the entire space of joint policies, by utilizing upper bounds on the expected values of policies and the interaction structure of the agents. Akin to some of the algorithms for DCOP [8, 12], SPIDER has a pre-processing step that constructs a DFS tree corresponding to the given interaction structure. Note that these DFS trees are pseudo trees [12] that allow links between ancestors and children. We employ the Maximum Constrained Node (MCN) heuristic used in the DCOP algorithm, ADOPT [8], however other heuristics (such as MLSP heuristic from [7]) can also be employed. MCN heuristic tries to place agents with more number of constraints at the top of the tree. This tree governs how the search for the optimal joint pol The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 823 icy proceeds in SPIDER. The algorithms presented in this paper are easily extendable to hyper-trees, however for expository purposes, we assume binary trees. SPIDER is an algorithm for centralized planning and distributed execution in distributed POMDPs. In this paper, we employ the following notation to denote policies and expected values: Ancestors (i) agents from i to the root (not including i). Tree (i) agents in the sub-tree (not including i) for which i is the root. .7 rroot + joint policy of all agents. .7 r' + joint policy of all agents in Tree (i) U i. .7 r'--joint policy of agents that are in Ancestors (i). .7 r' policy of the ith agent. ˆv [.7 r', .7 r'--] upper bound on the expected value for .7 r' + given .7 r' and policies of ancestor agents i.e. .7 r'--. ˆvj [.7 r', .7 r'--] upper bound on the expected value for .7 r' + from the jth child. v [.7 r', .7 r'--] expected value for .7 r' given policies of ancestor agents, .7 r'--. v [.7 r' +, .7 r'--] expected value for .7 r' + given policies of ancestor agents, .7 r'--. vj [.7 r' +, .7 r'--] expected value for .7 r' + from the jth child. Figure 2: Execution of SPIDER, an example 4.1 Outline of SPIDER SPIDER is based on the idea of branch and bound search, where the nodes in the search tree represent partial/complete joint policies. Figure 2 shows an example search tree for the SPIDER algorithm, using an example of the three agent chain. Before SPIDER begins its search we create a DFS tree (i.e. pseudo tree) from the three agent chain, with the middle agent as the root of this tree. SPIDER exploits the structure of this DFS tree while engaging in its search. Note that in our example figure, each agent is assigned a policy with T = 2. Thus, each rounded rectange (search tree node) indicates a partial/complete joint policy, a rectangle indicates an agent and the ovals internal to an agent show its policy. 
Heuristic or actual expected value for a joint policy is indicated in the top right corner of the rounded rectangle. If the number is italicized and underlined, it implies that the actual expected value of the joint policy is provided. SPIDER begins with no policy assigned to any of the agents (shown in the level 1 of the search tree). Level 2 of the search tree indicates that the joint policies are sorted based on upper bounds computed for root agent's policies. Level 3 shows one SPIDER search node with a complete joint policy (a policy assigned to each of the agents). The expected value for this joint policy is used to prune out the nodes in level 2 (the ones with upper bounds <234) When creating policies for each non-leaf agent i, SPIDER potentially performs two steps: 1. Obtaining upper bounds and sorting: In this step, agent i computes upper bounds on the expected values, ˆv [.7 r', .7 r'--] of the joint policies .7 r' + corresponding to each of its policy .7 r' and fixed ancestor policies. An MDP based heuristic is used to compute these upper bounds on the expected values. Detailed description about this MDP heuristic is provided in Section 4.2. All policies of agent i, II' are then sorted based on these upper bounds (also referred to as heuristic values henceforth) in descending order. Exploration of these policies (in step 2 below) are performed in this descending order. As indicated in the level 2 of the search tree (of Figure 2), all the joint policies are sorted based on the heuristic values, indicated in the top right corner of each joint policy. The intuition behind sorting and then exploring policies in descending order of upper bounds, is that the policies with higher upper bounds could yield joint policies with higher expected values. 2. Exploration and Pruning: Exploration implies computing the best response joint policy .7 r' +, = corresponding to fixed ancestor policies of agent i, .7 r'--. This is performed by iterating through all policies of agent i i.e. II' and summing two quantities for each policy: (i) the best response for all of i's children (obtained by per forming steps 1 and 2 at each of the child nodes); (ii) the expected value obtained by i for fixed policies of ancestors. Thus, exploration of a policy .7 r' yields actual expected value of a joint policy, .7 r' + represented as v [.7 r' +, .7 r'--]. The policy with the highest expected value is the best response policy. Pruning refers to avoiding exploring all policies (or computing expected values) at agent i by using the current best expected value, vmax [.7 r' +, .7 r'--]. Henceforth, this vmax [.7 r' +, .7 r'--] will be referred to as threshold. A policy, .7 r' need not be explored if the upper bound for that policy, ˆv [.7 r', .7 r'--] is less than the threshold. This is because the expected value for the best joint policy attainable for that policy will be less than the threshold. On the other hand, when considering a leaf agent, SPIDER computes the best response policy (and consequently its expected value) corresponding to fixed policies of its ancestors, .7 r'--. This is accomplished by computing expected values for each of the policies (corresponding to fixed policies of ancestors) and selecting the highest expected value policy. In Figure 2, SPIDER assigns best response policies to leaf agents at level 3. The policy for the left leaf agent is to perform action "East" at each time step in the policy, while the policy for the right leaf agent is to perform "Off" at each time step. 
These best response policies from the leaf agents yield an actual expected value of 234 for the complete joint policy. Algorithm 1 provides the pseudo code for SPIDER. This algorithm outputs the best joint policy, .7 r' +, = (with an expected value greater than threshold) for the agents in Tree (i). Lines 3-8 compute the best response policy of a leaf agent i, while lines 9-23 computes the best response joint policy for agents in Tree (i). This best response computation for a non-leaf agent i includes: (a) Sorting of policies (in descending order) based on heuristic values on line 11; (b) Computing best response policies at each of the children for fixed policies of agent i in lines 16-20; and (c) Maintaining 824 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Algorithm 1 SPIDER (i, 7ri −, threshold) 1: πi +, ∗ ← null 2: Πi ← GET-ALL-POLICIES (horizon, Ai, Ωi) 3: if IS-LEAF (i) then 4: for all πi ∈ Πi do 5: v [πi, πi −] ← JOINT-REWARD (πi, πi −) 6: if v [πi, πi −]> threshold then 7: πi +, ∗ ← πi 8: threshold ← v [πi, πi −] 9: else 10: children ← CHILDREN (i) 11: ˆΠi ← UPPER-BOUND-SORT (i, Πi, πi −) 12: for all πi ∈ ˆΠi do 13: ˜πi + ← πi 14: if ˆv [πi, πi −] <threshold then 15: Go to line 12 16: for all j ∈ children do 17: jThres ← threshold − v [πi, πi −] − Σk ∈ children, k ~ = j ˆvk [πi, πi −] 18: πj +, ∗ ← SPIDER (j, πi ~ πi −, jThres) 19: ˜πi + ← ˜πi + ~ πj +, ∗ 20: ˆvj [πi, πi −] ← v [πj +, ∗, πi ~ πi −] Algorithm 2 UPPER-BOUND-SORT (i, Ili, 7ri −) 1: children ← CHILDREN (i) 2: ˆΠi ← null / * Stores the sorted list * / 3: for all πi ∈ Πi do 4: ˆv [πi, πi −] ← JOINT-REWARD (πi, πi −) 5: for all j ∈ children do 6: ˆvj [πi, πi −] ← UPPER-BOUND (i, j, πi ~ πi −) 7: ˆv [πi, πi −] + ← ˆvj [πi, πi −] 8: ˆΠi ← INSERT-INTO-SORTED (πi, ˆΠi) 9: return ˆΠi best expected value, joint policy in lines 21-23. Algorithm 2 provides the pseudo code for sorting policies based on the upper bounds on the expected values of joint policies. Expected value for an agent i consists of two parts: value obtained from ancestors and value obtained from its children. Line 4 computes the expected value obtained from ancestors of the agent (using JOINT-REWARD function), while lines 5-7 compute the heuristic value from the children. The sum of these two parts yields an upper bound on the expected value for agent i, and line 8 of the algorithm sorts the policies based on these upper bounds. 4.2 MDP based heuristic function The heuristic function quickly provides an upper bound on the expected value obtainable from the agents in Tree (i). The subtree of agents is a distributed POMDP in itself and the idea here is to construct a centralized MDP corresponding to the (sub-tree) distributed POMDP and obtain the expected value of the optimal policy for this centralized MDP. To reiterate this in terms of the agents in DFS tree interaction structure, we assume full observability for the agents in Tree (i) and for fixed policies of the agents in {Ancestors (i) ∪ i}, we compute the joint value ˆv [7ri +, 7ri −]. We use the following notation for presenting the equations for computing upper bounds/heuristic values (for agents i and k): Let Ei − denote the set of links between agents in {Ancestors (i) ∪ i} and Tree (i), Ei + denote the set of links between agents in Tree (i). 
Also, if l ∈ Ei −, then l1 is the agent in {Ancestors (i) ∪ Depending on the location of agent k in the agent tree we have the following cases: The value function for an agent i executing the joint policy 7ri + at time η − 1 is provided by the equation: 1: val ← 0 2: for all l ∈ Ej − ∪ Ej + do 3: if l ∈ Ej − then πl1 ← φ 4: for all s0l do + 5: val ← startBel [s0l] · UPPER-BOUND-TIME (i, s0l, j, πl1, ~ ~) 6: return val Algorithm 4 UPPER-BOUND-TIME (i, stl, j, 7rl1, ~ ωtl1) 1: maxVal ← − ∞ 2: for all al1, al2 do 3: if l ∈ Ei − and l ∈ Ej − then al1 ← πl1 (~ ωtl1) 4: val ← GET-REWARD (stl, al1, al2) 5: if t <πi.horizon − 1 then 6: for all st +1 l, ωt +1 l1 do 7: futV al ← ptu ˆptl1 ˆptl2 8: futVal ← ∗ UPPER-BOUND-TIME (st +1 9: val ← + futVal 10: if val> maxVal then maxV al ← val 11: return maxV al Upper bound on the expected value for a link is computed by modifying the equation 3 to reflect the full observability assumption. This involves removing the observational probability term for agents in Tree (i) and maximizing the future value ˆvηl over the actions of those agents (in Tree (i)). Thus, the equation for the The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 825 computation of the upper bound on a link l, is as follows: Algorithm 3 and Algorithm 4 provide the algorithm for computing upper bound for child j of agent i, using the equations descirbed above. While Algorithm 4 computes the upper bound on a link given the starting state, Algorithm 3 sums the upper bound values computed over each of the links in Ei − ∪ Ei +. 4.3 Abstraction Algorithm 5 SPIDER-ABS (i, .7 ri −, threshold) 1: πi +, ∗ ← null 2: Πi ← GET-POLICIES (<>, 1) 3: if IS-LEAF (i) then 4: for all πi ∈ Πi do 5: absHeuristic ← GET-ABS-HEURISTIC (πi, πi −) 6: absHeuristic ∗ ← (timeHorizon − πi.horizon) 7: if πi.horizon = timeHorizon and πi.absNodes = 0 then 8: v [πi, πi −] ← JOINT-REWARD (πi, πi −) 9: if v [πi, πi −]> threshold then 10: πi +, ∗ ← πi; threshold ← v [πi, πi −] 12: ˆΠi ← EXTEND-POLICY (πi, πi.absNodes + 1) 13: Πi ← INSERT-SORTED-POLICIES (ˆΠi) + 14: REMOVE (πi) 15: else 16: children ← CHILDREN (i) 17: Πi ← UPPER-BOUND-SORT (i, Πi, πi −) 18: for all πi ∈ Πi do 19: ˜πi + ← πi 20: absHeuristic ← GET-ABS-HEURISTIC (πi, πi −) 21: absHeuristic ← ∗ (timeHorizon − πi.horizon) 22: if πi.horizon = timeHorizon and πi.absNodes = 0 then 23: if ˆv [πi, πi −] <threshold and πi.absNodes = 0 then 24: Go to line 19 25: for all j ∈ children do 26: jThres ← threshold − v [πi, πi −] − Σk ∈ children, k ~ = j ˆvk [πi, π i −] 27: πj +, ∗ ← SPIDER (j, πi ~ πi −, jThres) 28: ˜πi + ← ˜πi + ~ πj +, ∗; ˆvj [πi, πi −] ← v [πj +, ∗, πi ~ πi −] 29: if v [˜πi +, πi −]> threshold then 30: threshold ← v [˜πi +, πi −]; πi +, ∗ ← ˜πi + 31: else if ˆv [πi +, πi −] + absHeuristic> threshold then 32: ˆΠi ← EXTEND-POLICY (πi, πi.absNodes + 1) 33: Πi ← INSERT-SORTED-POLICIES (ˆΠi) + 34: REMOVE (πi) 35: return πi +, ∗ In SPIDER, the exploration/pruning phase can only begin after the heuristic (or upper bound) computation and sorting for the policies has ended. We provide an approach to possibly circumvent the exploration of a group of policies based on heuristic computation for one abstract policy, thus leading to an improvement in runtime performance (without loss in solution quality). The important steps in this technique are defining the abstract policy and how heuristic values are computated for the abstract policies. In this paper, we propose two types of abstraction: 1. 
Horizon Based Abstraction (HBA): Here, the abstract policy is defined as a shorter horizon policy. It represents a group of longer horizon policies that have the same actions as the abstract policy for times less than or equal to the horizon of the abstract policy. In Figure 3 (a), a T = 1 abstract policy that performs "East" action, represents a group of T = 2 policies, that perform "East" in the first time step. For HBA, there are two parts to heuristic computation: (a) Computing the upper bound for the horizon of the abstract policy. This is same as the heuristic computation defined by the GETHEURISTIC () algorithm for SPIDER, however with a shorter time horizon (horizon of the abstract policy). (b) Computing the maximum possible reward that can be accumulated in one time step (using GET-ABS-HEURISTIC ()) and multiplying it by the number of time steps to time horizon. This maximum possible reward (for one time step) is obtained by iterating through all the actions of all the agents in Tree (i) and computing the maximum joint reward for any joint action. Sum of (a) and (b) is the heuristic value for a HBA abstract policy. 2. Node Based Abstraction (NBA): Here an abstract policy is obtained by not associating actions to certain nodes of the policy tree. Unlike in HBA, this implies multiple levels of abstraction. This is illustrated in Figure 3 (b), where there are T = 2 policies that do not have an action for observation ` TP'. These incomplete T = 2 policies are abstractions for T = 2 complete policies. Increased levels of abstraction leads to faster computation of a complete joint policy, .7 rroot + and also to shorter heuristic computation and exploration, pruning phases. For NBA, the heuristic computation is similar to that of a normal policy, except in cases where there is no action associated with policy nodes. In such cases, the immediate reward is taken as Rmax (maximum reward for any action). We combine both the abstraction techniques mentioned above into one technique, SPIDER-ABS. Algorithm 5 provides the algorithm for this abstraction technique. For computing optimal joint policy with SPIDER-ABS, a non-leaf agent i initially examines all abstract T = 1 policies (line 2) and sorts them based on abstract policy heuristic computations (line 17). The abstraction horizon is gradually increased and these abstract policies are then explored in descending order of heuristic values and ones that have heuristic values less than the threshold are pruned (lines 23-24). Exploration in SPIDER-ABS has the same definition as in SPIDER if the policy being explored has a horizon of policy computation which is equal to the actual time horizon and if all the nodes of the policy have an action associated with them (lines 25-30). However, if those conditions are not met, then it is substituted by a group of policies that it represents (using EXTEND-POLICY () function) (lines 31-32). EXTEND-POLICY () function is also responsible for initializing the horizon and absNodes of a policy. absNodes represents the number of nodes at the last level in the policy tree, that do not have an action assigned to them. If .7 ri.absNodes = | Ωi | πi.horizon − 1 (i.e. total number of policy nodes possible at .7 ri.horizon), then .7 ri.absNodes is set to zero and .7 ri.horizon is increased by 1. Otherwise, .7 ri.absNodes is increased by 1. Thus, this function combines both HBA and NBA by using the policy variables, horizon and absNodes. 
Before substituting the abstract policy with a group of policies, those policies are sorted based on heuristic values (line 33). Similar type of abstraction based best response computation is adopted at leaf agents (lines 3-14). 4.4 Value ApproXimation (VAX) In this section, we present an approximate enhancement to SPIDER called VAX. The input to this technique is an approximation parameter E, which determines the difference from the optimal solution quality. This approximation parameter is used at each agent for pruning out joint policies. The pruning mechanism in SPIDER and SPIDER-Abs dictates that a joint policy be pruned only if the threshold is exactly greater than the heuristic value. However, the 11: else if v [πi, πi −] + absHeuristic> threshold then 826 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 3: Example of abstraction for (a) HBA (Horizon Based Abstraction) and (b) NBA (Node Based Abstraction) idea in this technique is to prune out joint a policy if the following condition is satisfied: threshold + E> ˆv [.7 ri, .7 ri -]. Apart from the pruning condition, VAX is the same as SPIDER/SPIDER-ABS. In the example of Figure 2, if the heuristic value for the second joint policy (or second search tree node) in level 2 were 238 instead of 232, then that policy could not be be pruned using SPIDER or SPIDER-Abs. However, in VAX with an approximation parameter of 5, the joint policy in consideration would also be pruned. This is because the threshold (234) at that juncture plus the approximation parameter (5), i.e. 239 would have been greater than the heuristic value for that joint policy (238). It can be noted from the example (just discussed) that this kind of pruning can lead to fewer explorations and hence lead to an improvement in the overall run-time performance. However, this can entail a sacrifice in the quality of the solution because this technique can prune out a candidate optimal solution. A bound on the error introduced by this approximate algorithm as a function of E, is provided by Proposition 3. 4.5 Percentage ApproXimation (PAX) In this section, we present the second approximation enhancement over SPIDER called PAX. Input to this technique is a parameter, δ that represents the minimum percentage of the optimal solution quality that is desired. Output of this technique is a policy with an expected value that is at least δ% of the optimal solution quality. A policy is pruned if the following condition is satisfied: threshold> δ100ˆv [.7 ri, .7 ri -]. Like in VAX, the only difference between PAX and SPIDER/SPIDER-ABS is this pruning condition. Again in Figure 2, if the heuristic value for the second search tree node in level 2 were 238 instead of 232, then PAX with an input parameter of 98% would be able to prune that search tree node (since 98 100 * 238 <234). This type of pruning leads to fewer explorations and hence an improvement in run-time performance, while potentially leading to a loss in quality of the solution. Proposition 4 provides the bound on quality loss. 4.6 Theoretical Results PROPOSITION 1. Heuristic provided using the centralized MDP heuristic is admissible. Proof. For the value provided by the heuristic to be admissible, it should be an over estimate of the expected value for a joint policy. Thus, we need to show that: For l E Ei + U Ei -: ˆvtl> vtl (refer to notation in Section 4.2) We use mathematical induction on t to prove this. Base case: t = T--1. 
Irrespective of whether l E Ei - or l E Ei +, ˆrtl is computed by maximizing over all actions of the agents in Tree (i), while rtl is computed for fixed policies of the same agents. Hence, ˆrtl> rtl and also ˆvtl> vtl. Assumption: Proposition holds for t = 77, where 1 <77 <T--1. We now have to prove that the proposition holds for t = 77--1. We show the proof for l E Ei - and similar reasoning can be adopted to prove for l E Ei +. The heuristic value function for l E Ei - is provided by the following equation: Proof. SPIDER examines all possible joint policies given the interaction structure of the agents. The only exception being when a joint policy is pruned based on the heuristic value. Thus, as long as a candidate optimal policy is not pruned, SPIDER will return an optimal policy. As proved in Proposition 1, the expected value for a joint policy is always an upper bound. Hence when a joint policy is pruned, it cannot be an optimal solution. PROPOSITION 3. Error bound on the solution quality for VAX (implemented over SPIDER-ABS) with an approximation parameter of E is ρE, where ρ is the number of leaf nodes in the DFS tree. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 827 Proof. We prove this proposition using mathematical induction on the depth of the DFS tree. Base case: depth = 1 (i.e. one node). Best response is computed by iterating through all policies, Ilk. A policy, πk is pruned if ˆv [πk, πk--] <threshold + ~. Thus the best response policy computed by VAX would be at most ~ away from the optimal best response. Hence the proposition holds for the base case. Assumption: Proposition holds for d, where 1 ≤ depth ≤ d. We now have to prove that the proposition holds for d + 1. Without loss of generality, lets assume that the root node of this tree has k children. Each of this children is of depth ≤ d, and hence from the assumption, the error introduced in kth child is ρk ~, where ρk is the number of leaf nodes in kth child of the root. Therefore, ρ = Ek ρk, where ρ is the number of leaf nodes in the tree. E In SPIDER-ABS, threshold at the root agent, thresspider = k v [πk +, πk--]. However, with VAX the threshold at the root agent will be (in the worst case), threshvax = Ek v [πk +, πk--] − E k ρk ~. Hence, with VAX a joint policy is pruned at the root agent if ˆv [πroot, πroot--] <threshvax + ~ ⇒ ˆv [πroot, πroot--] <threshspider − ((Ek ρk) − 1) ~ ≤ threshspider − (Ek ρk) ~ ≤ threshspider − ρ ~. Hence proved. ■ PROPOSITION 4. For PAX (implemented over SPIDER-ABS) with an input parameter of δ, the solution quality is at least δ 100 v [πroot +, *], where v [πroot +, *] denotes the optimal solution quality. Proof. We prove this proposition using mathematical induction on the depth of the DFS tree. Base case: depth = 1 (i.e. one node). Best response is computed by iterating through all policies, Ilk. A policy, πk is pruned if δ 100 ˆv [πk, πk--] <threshold. Thus the best response policy computed by PAX would be at least δ 100 times the optimal best response. Hence the proposition holds for the base case. Assumption: Proposition holds for d, where 1 ≤ depth ≤ d. We now have to prove that the proposition holds for d + 1. Without loss of generality, lets assume that the root node of this tree has k children. Each of this children is of depth ≤ d, and hence from the assumption, the solution quality in the kth child is at least δ100v [πk +, *, πk--] for PAX. 
With SPIDER-ABS, a joint policy is pruned at the root agent if ˆv [πroot, πroot--] <Ek v [πk +, *, πk--]. However with PAX, a joint policy is pruned if k v [πk +, *, πk--]. Since the pruning condition at the root agent in PAX is the same as the one in SPIDER-ABS, there is no error introduced at the root agent and all the error is introduced in the children. Thus, overall solution quality is at least δ 100 of the optimal solution. Hence proved. ■ 5. EXPERIMENTAL RESULTS All our experiments were conducted on the sensor network domain from Section 2. The five network configurations employed are shown in Figure 4. Algorithms that we experimented with are GOA, SPIDER, SPIDER-ABS, PAX and VAX. We compare against GOA because it is the only global optimal algorithm that considers more than two agents. We performed two sets of experiments: (i) firstly, we compared the run-time performance of the above algorithms and (ii) secondly, we experimented with PAX and VAX to study the tradeoff between run-time and solution quality. Experiments were terminated after 10000 seconds1. Figure 5 (a) provides run-time comparisons between the optimal algorithms GOA, SPIDER, SPIDER-Abs and the approximate algorithms, PAX (~ of 30) and VAX (δ of 80). X-axis denotes the sensor network configuration used, while Y-axis indicates the runtime (on a log-scale). The time horizon of policy computation was 3. For each configuration (3-chain, 4-chain, 4-star and 5-star), there are five bars indicating the time taken by GOA, SPIDER, SPIDERAbs, PAX and VAX. GOA did not terminate within the time limit for 4-star and 5-star configurations. SPIDER-Abs dominated the SPIDER and GOA for all the configurations. For instance, in the 3chain configuration, SPIDER-ABS provides 230-fold speedup over GOA and 2-fold speedup over SPIDER and for the 4-chain configuration it provides 58-fold speedup over GOA and 2-fold speedup over SPIDER. The two approximation approaches, VAX and PAX provided further improvement in performance over SPIDER-Abs. For instance, in the 5-star configuration VAX provides a 15-fold speedup and PAX provides a 8-fold speedup over SPIDER-Abs. Figures 5 (b) provides a comparison of the solution quality obtained using the different algorithms for the problems tested in Figure 5 (a). X-axis denotes the sensor network configuration while Y-axis indicates the solution quality. Since GOA, SPIDER, and SPIDER-Abs are all global optimal algorithms, the solution quality is the same for all those algorithms. For 5-P configuration, the global optimal algorithms did not terminate within the limit of 10000 seconds, so the bar for optimal quality indicates an upper bound on the optimal solution quality. With both the approximations, we obtained a solution quality that was close to the optimal solution quality. In 3-chain and 4-star configurations, it is remarkable that both PAX and VAX obtained almost the same actual quality as the global optimal algorithms, despite the approximation parameter ~ and δ. For other configurations as well, the loss in quality was less than 20% of the optimal solution quality. Figure 5 (c) provides the time to solution with PAX (for varying epsilons). X-axis denotes the approximation parameter, δ (percentage to optimal) used, while Y-axis denotes the time taken to compute the solution (on a log-scale). The time horizon for all the configurations was 4. As δ was decreased from 70 to 30, the time to solution decreased drastically. 
For instance, in the 3-chain case there was a total speedup of 170-fold when the δ was changed from 70 to 30. Interestingly, even with a low δ of 30%, the actual solution quality remained equal to the one obtained at 70%. Figure 5 (d) provides the time to solution for all the configurations with VAX (for varying epsilons). X-axis denotes the approximation parameter, ~ used, while Y-axis denotes the time taken to compute the solution (on a log-scale). The time horizon for all the configurations was 4. As ~ was increased, the time to solution decreased drastically. For instance, in the 4-star case there was a total speedup of 73-fold when the ~ was changed from 60 to 140. Again, the actual solution quality did not change with varying epsilon. Figure 4: Sensor network configurations Figure 5: Comparison of GOA, SPIDER, SPIDER-Abs and VAX for T = 3 on (a) Runtime and (b) Solution quality; (c) Time to solution for PAX with varying percentage to optimal for T = 4 (d) Time to solution for VAX with varying epsilon for T = 4 6. SUMMARY AND RELATED WORK This paper presents four algorithms SPIDER, SPIDER-ABS, PAX and VAX that provide a novel combination of features for policy search in distributed POMDPs: (i) exploiting agent interaction structure given a network of agents (i.e. easier scale-up to larger number of agents); (ii) using branch and bound search with an MDP based heuristic function; (iii) utilizing abstraction to improve runtime performance without sacrificing solution quality; (iv) providing a priori percentage bounds on quality of solutions using PAX; and (v) providing expected value bounds on the quality of solutions using VAX. These features allow for systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty. Experimental results show orders of magnitude improvement in performance over previous global optimal algorithms. Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques compute global optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominant policies, that provides optimal solutions for distributed POMDPs. Szer et al. [13] provide an optimal heuristic search method for solving Decentralized POMDPs. This algorithm is based on the combination of a classical heuristic search algorithm, A ∗ and decentralized control theory. The key differences between SPIDER and MAA * are: (a) Enhancements to SPIDER (VAX and PAX) provide for quality guaranteed approximations, while MAA * is a global optimal algorithm and hence involves significant computational complexity; (b) Due to MAA *'s inability to exploit interaction structure, it was illustrated only with two agents. However, SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA * expands it one time step at a time (simultaneously for all the agents). The second set of techniques seek approximate policies. EmeryMontemerlo et al. [4] approximate POSGs as a series of one-step Bayesian games using heuristics to approximate future value, trading off limited lookahead for computational efficiency, resulting in locally optimal policies (with respect to the selected heuristic). Nair et al. [9]'s JESP algorithm uses dynamic programming to reach a local optimum solution for finite horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al. 
[2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all the above techniques. Acknowledgements. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division under Contract No. NBCHD030010. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies ABSTRACT Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to a larger number of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms. 1. INTRODUCTION Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of non-determinism in the outcomes of actions and because the world state may only be partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches towards solving these models. The first category consists of highly efficient approximate techniques that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second, less popular category of approaches has focused on a global optimal result [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focusing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm. There are two key novel features in SPIDER: (i) it is a branch and bound heuristic search technique that uses an MDP-based heuristic function to search for an optimal joint policy; (ii) it exploits the network structure of agents by organizing agents into a Depth First Search (DFS) pseudo tree and takes advantage of the independence in the different branches of the DFS tree. We then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstractions for speedup, but does not sacrifice solution quality.
In particular, it initially performs branch and bound search on abstract policies and then extends to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but within an input parameter that provides the tolerable expected value difference from the optimal solution. The third enhancement is again based on bounding the search for efficiency, however with a tolerance parameter that is provided as a percentage of optimal. We experimented with the sensor network domain presented in Nair et al. [10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements it enables principled tradeoffs in run-time versus solution quality. 2. DOMAIN: DISTRIBUTED SENSOR NETS 3. BACKGROUND 3.1 Model: Network Distributed POMDP The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. It is defined as the tuple $(S, A, P, \Omega, O, R, b)$, where $S = \times_{1 \le i \le n} S_i \times S_u$ is the set of world states. $S_i$ refers to the set of local states of agent $i$ and $S_u$ is the set of unaffectable states. Unaffectable state refers to that part of the world state that cannot be affected by the agents' actions, e.g. environmental factors like target locations that no agent can control. $A = \times_{1 \le i \le n} A_i$ is the set of joint actions, where $A_i$ is the set of actions for agent $i$. ND-POMDP assumes transition independence, where the transition function is defined as $P(s, a, s') = P_u(s_u, s'_u) \cdot \prod_{1 \le i \le n} P_i(s_i, s_u, a_i, s'_i)$, where $a = (a_1, \ldots, a_n)$ is the joint action performed in state $s = (s_1, \ldots, s_n, s_u)$ and $s' = (s'_1, \ldots, s'_n, s'_u)$ is the resulting state. $\Omega = \times_{1 \le i \le n} \Omega_i$ is the set of joint observations, where $\Omega_i$ is the set of observations for agent $i$. Observational independence is assumed in ND-POMDPs, i.e., the joint observation function is defined as $O(s, a, \omega) = \prod_{1 \le i \le n} O_i(s_i, s_u, a_i, \omega_i)$, where $s = (s_1, \ldots, s_n, s_u)$ is the world state that results from the agents performing $a = (a_1, \ldots, a_n)$ in the previous state, and $\omega = (\omega_1, \ldots, \omega_n) \in \Omega$ is the observation received in state $s$. This implies that each agent's observation depends only on the unaffectable state, its local action and on its resulting local state. The reward function, $R$, is defined as $R(s, a) = \sum_l R_l(s_{l_1}, \ldots, s_{l_r}, s_u, (a_{l_1}, \ldots, a_{l_r}))$, where each $l$ could refer to any sub-group of agents and $r = |l|$. Based on the reward function, an interaction hypergraph is constructed. A hyper-link, $l$, exists between a subset of agents for all $R_l$ that comprise $R$. The interaction hypergraph is defined as $G = (Ag, E)$, where the agents, $Ag$, are the vertices and $E = \{l \mid l \subseteq Ag \text{ and } R_l \text{ is a component of } R\}$ are the edges. The initial belief state (distribution over the initial state), $b$, is defined as $b(s) = b_u(s_u) \cdot \prod_{1 \le i \le n} b_i(s_i)$, where $b_u$ and $b_i$ refer to the distribution over the initial unaffectable state and agent $i$'s initial belief state, respectively. The goal in ND-POMDP is to compute the joint policy $\pi = (\pi_1, \ldots, \pi_n)$ that maximizes the team's expected reward over a finite horizon $T$ starting from the belief state $b$.
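To make the factored definitions above concrete, the following sketch encodes a toy two-agent ND-POMDP fragment in Python. Everything in it (the state spaces, the probability tables, the sensor accuracy) is an invented placeholder, not part of the original model; only the product-form transition and observation functions mirror the definitions in the text.

```python
# Illustrative sketch of ND-POMDP transition and observational independence.
from itertools import product

S_u = ["target_left", "target_right"]   # unaffectable part of the state (placeholder)
S_i = ["idle", "scanning"]              # local states, same for both agents (placeholder)

def P_u(su, su_next):
    """Unaffectable dynamics: the target moves independently of the agents."""
    return 0.5  # placeholder: uniform random walk over S_u

def P_i(si, su, ai, si_next):
    """Local transition of agent i: depends only on its own state/action and s_u."""
    return 1.0 if si_next == ("scanning" if ai == "scan" else "idle") else 0.0

def O_i(si_next, su_next, ai, wi):
    """Local observation of agent i: depends on s_u, its action, its next local state."""
    if ai != "scan":
        return 1.0 if wi == "none" else 0.0
    correct = "present" if su_next == "target_left" else "absent"
    return 0.9 if wi == correct else 0.1  # noisy sensor, placeholder accuracy

def P_joint(s, a, s_next):
    """Transition independence: P(s, a, s') = P_u(...) * prod_i P_i(...)."""
    su, locals_, su_n, locals_n = s[0], s[1:], s_next[0], s_next[1:]
    p = P_u(su, su_n)
    for si, ai, si_n in zip(locals_, a, locals_n):
        p *= P_i(si, su, ai, si_n)
    return p

def O_joint(s_next, a, w):
    """Observational independence: O(s, a, w) = prod_i O_i(...)."""
    su_n, locals_n = s_next[0], s_next[1:]
    p = 1.0
    for si_n, ai, wi in zip(locals_n, a, w):
        p *= O_i(si_n, su_n, ai, wi)
    return p

# Sanity checks: joint transition probabilities sum to 1 for a fixed (s, a).
s, a = ("target_left", "idle", "idle"), ("scan", "off")
total = sum(P_joint(s, a, (su_n,) + ls)
            for su_n in S_u for ls in product(S_i, repeat=2))
assert abs(total - 1.0) < 1e-9
print(O_joint(("target_left", "scanning", "idle"), a, ("present", "none")))  # 0.9
```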
An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12] where the variable at each node represents the policy selected by an individual agent, $\pi_i$, with the domain of the variable being the set of all local policies, $\Pi_i$. The reward component $R_l$ where $|l| = 1$ can be thought of as a local constraint, while the reward component $R_l$ where $|l| > 1$ corresponds to a non-local constraint in the constraint graph. 3.2 Algorithm: Global Optimal Algorithm (GOA) In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10]. We will use GOA in our experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents. GOA borrows from a global optimal DCOP algorithm called DPOP [12]. GOA's message passing follows that of DPOP. The first phase is the UTIL propagation, where the utility messages, in this case values of policies, are passed up from the leaves to the root. The value for a policy at an agent is defined as the sum of the best response values from its children and the joint policy reward associated with the parent policy. Thus, given a policy for a parent node, GOA requires an agent to iterate through all its policies, finding the best response policy and returning the value to the parent, while at the parent node, to find the best policy, an agent requires its children to return their best responses to each of its policies. This UTIL propagation process is repeated at each level in the tree, until the root exhausts all its policies. In the second phase, VALUE propagation, the optimal policies are passed down from the root to the leaves. GOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (associated with nodes not connected directly in the tree). Since the interaction graph captures all the reward interactions among agents and as this algorithm iterates through all the relevant joint policy evaluations, this algorithm yields a globally optimal solution. 4. SPIDER 4.1 Outline of SPIDER 4.2 MDP based heuristic function 4.3 Abstraction 4.4 Value ApproXimation (VAX) 4.5 Percentage ApproXimation (PAX) 4.6 Theoretical Results 5. EXPERIMENTAL RESULTS 6. SUMMARY AND RELATED WORK This paper presents four algorithms, SPIDER, SPIDER-ABS, PAX and VAX, that provide a novel combination of features for policy search in distributed POMDPs: (i) exploiting agent interaction structure given a network of agents (i.e. easier scale-up to a larger number of agents); (ii) using branch and bound search with an MDP-based heuristic function; (iii) utilizing abstraction to improve runtime performance without sacrificing solution quality; (iv) providing a priori percentage bounds on the quality of solutions using PAX; and (v) providing expected value bounds on the quality of solutions using VAX. These features allow for a systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty.
Experimental results show orders of magnitude improvement in performance over previous global optimal algorithms. Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques compute global optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominant policies that provides optimal solutions for distributed POMDPs. Szer et al. [13] provide an optimal heuristic search method for solving Decentralized POMDPs. This algorithm is based on the combination of a classical heuristic search algorithm, A*, and decentralized control theory. The key differences between SPIDER and MAA* are: (a) enhancements to SPIDER (VAX and PAX) provide for quality guaranteed approximations, while MAA* is a global optimal algorithm and hence involves significant computational complexity; (b) due to MAA*'s inability to exploit interaction structure, it was illustrated only with two agents, whereas SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents). The second set of techniques seek approximate policies. Emery-Montemerlo et al. [4] approximate POSGs as a series of one-step Bayesian games using heuristics to approximate future value, trading off limited lookahead for computational efficiency, resulting in locally optimal policies (with respect to the selected heuristic). Nair et al. [9]'s JESP algorithm uses dynamic programming to reach a local optimum solution for finite horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al. [2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all the above techniques. Acknowledgements. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division under Contract No. NBCHD030010. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies ABSTRACT Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to a larger number of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms. 1. INTRODUCTION Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of non-determinism in the outcomes of actions and because the world state may only be partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches towards solving these models. The first category consists of highly efficient approximate techniques that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second, less popular category of approaches has focused on a global optimal result [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focusing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm. We then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstractions for speedup, but does not sacrifice solution quality. In particular, it initially performs branch and bound search on abstract policies and then extends to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but within an input parameter that provides the tolerable expected value difference from the optimal solution.
The third enhancement is again based on bounding the search for efficiency, however with a tolerance parameter that is provided as a percentage of optimal. We experimented with the sensor network domain presented in Nair et al. [10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements it enables principled tradeoffs in run-time versus solution quality. 3. BACKGROUND 3.1 Model: Network Distributed POMDP The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. $S_i$ refers to the set of local states of agent $i$ and $S_u$ is the set of unaffectable states. Unaffectable state refers to that part of the world state that cannot be affected by the agents' actions, e.g. environmental factors like target locations that no agent can control. This implies that each agent's observation depends only on the unaffectable state, its local action and on its resulting local state. The reward function, $R$, is defined as $R(s, a) = \sum_l R_l(s_{l_1}, \ldots, s_{l_r}, s_u, (a_{l_1}, \ldots, a_{l_r}))$, where each $l$ could refer to any sub-group of agents and $r = |l|$. Based on the reward function, an interaction hypergraph is constructed. A hyper-link, $l$, exists between a subset of agents for all $R_l$ that comprise $R$. The interaction hypergraph is defined as $G = (Ag, E)$, where the agents, $Ag$, are the vertices and $E = \{l \mid l \subseteq Ag \text{ and } R_l \text{ is a component of } R\}$ are the edges. The goal in ND-POMDP is to compute the joint policy $\pi = (\pi_1, \ldots, \pi_n)$ that maximizes the team's expected reward over a finite horizon $T$ starting from the belief state $b$. An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12] where the variable at each node represents the policy selected by an individual agent, $\pi_i$, with the domain of the variable being the set of all local policies, $\Pi_i$. 3.2 Algorithm: Global Optimal Algorithm (GOA) In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10]. We will use GOA in our experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents. GOA borrows from a global optimal DCOP algorithm called DPOP [12]. GOA's message passing follows that of DPOP. The first phase is the UTIL propagation, where the utility messages, in this case values of policies, are passed up from the leaves to the root. The value for a policy at an agent is defined as the sum of the best response values from its children and the joint policy reward associated with the parent policy. This UTIL propagation process is repeated at each level in the tree, until the root exhausts all its policies. In the second phase, VALUE propagation, the optimal policies are passed down from the root to the leaves. GOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (associated with nodes not connected directly in the tree).
Since the interaction graph captures all the reward interactions among agents and as this algorithm iterates through all the relevant joint policy evaluations, this algorithm yields a globally optimal solution. 6. SUMMARY AND RELATED WORK These features allow for a systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty. Experimental results show orders of magnitude improvement in performance over previous global optimal algorithms. Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques compute global optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominant policies that provides optimal solutions for distributed POMDPs. Szer et al. [13] provide an optimal heuristic search method for solving Decentralized POMDPs. This algorithm is based on the combination of a classical heuristic search algorithm, A*, and decentralized control theory. The key differences between SPIDER and MAA* are: (a) enhancements to SPIDER (VAX and PAX) provide for quality guaranteed approximations, while MAA* is a global optimal algorithm and hence involves significant computational complexity; (b) due to MAA*'s inability to exploit interaction structure, it was illustrated only with two agents, whereas SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents). The second set of techniques seek approximate policies. Nair et al. [9]'s JESP algorithm uses dynamic programming to reach a local optimum solution for finite horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al. [2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all the above techniques. Acknowledgements. NBCHD030010.
I-72
Learning Consumer Preferences Using Semantic Similarity
In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics.
[ "consum prefer", "semant similar", "servic", "negoti", "price", "ontolog", "similar metric", "consum agent", "data repositori", "prefer learn", "candid elimin algorithm", "decis tree", "increment decis tree", "disjunct cea", "multipl version space", "disjunct hypothesi", "id3", "learn set", "rp similar", "induct learn" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U", "R", "U", "U", "M", "U", "M", "U", "U", "M", "M", "M" ]
Learning Consumer Preferences Using Semantic Similarity ∗ Reyhan Aydo˘gan reyhan.aydogan@gmail.com Pınar Yolum pinar.yolum@boun.edu.tr Department of Computer Engineering Bo˘gaziçi University Bebek, 34342, Istanbul, Turkey ABSTRACT In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Algorithms, Experimentation 1. INTRODUCTION Current approaches to e-commerce treat service price as the primary construct for negotiation by assuming that the service content is fixed [9]. However, negotiation on price presupposes that other properties of the service have already been agreed upon. Nevertheless, many times the service provider may not be offering the exact requested service due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15]. However, most existing negotiation approaches assume that all features of a service are equally important and concentrate on the price [5, 2]. In reality, not all features may be relevant, and the relevance of a feature may vary from consumer to consumer. For instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluation of the service components with different weights can be useful. Some studies take these weights as given a priori and use fixed weights [4]. On the other hand, mostly the producer does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer. Preference Learning: As an alternative, we propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time. We represent service requests as a vector of service features; a toy sketch of this representation is given below. We use an ontology in order to capture the relations between services and to construct the features for a given service. By using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation. The particular service we have used is a wine selling service.
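As a toy illustration of this representation (the feature names and value vocabulary below are placeholders, not the full Wine ontology), a request is simply a tuple of ontology-sanctioned feature values:

```python
# Illustrative sketch: a service request as a feature vector over a shared vocabulary.
WINE_FEATURES = ("body", "flavor", "color")        # feature order is fixed
FEATURE_VALUES = {                                  # vocabulary drawn from the ontology
    "body":   {"Light", "Medium", "Full"},
    "flavor": {"Delicate", "Moderate", "Strong"},
    "color":  {"Red", "White", "Rose"},
}

def make_request(body, flavor, color):
    """Validate a request against the shared vocabulary and return it as a tuple."""
    request = (body, flavor, color)
    for feature, value in zip(WINE_FEATURES, request):
        if value not in FEATURE_VALUES[feature]:
            raise ValueError(f"{value!r} is not a known {feature} in the ontology")
    return request

# A customer who cares only about color fills the other features arbitrarily:
print(make_request("Medium", "Strong", "Red"))      # -> ('Medium', 'Strong', 'Red')
```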
The wine seller learns the wine preferences of the customer to sell better targeted wines. The producer models the requests of the consumer and its counter offers to learn which features are more important for the consumer. Since no information is present before the interactions start, the learning algorithm has to be incremental so that it can be trained at run time and can revise itself with each new interaction. Service Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services. In other words, the question is how the producer uses the information that was learned from the dialogues to make the best offer to the consumer. For instance, assume that the producer has learned that the consumer wants to buy a red wine but the producer can only offer rose or white wine. What should the producer's offer contain: white wine or rose wine? If the producer has some domain knowledge about semantic similarity (e.g., knows that red and rose wines are taste-wise more similar than white wine), then it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure similarity between available services and learned preferences. The rest of this paper is organized as follows: Section 2 explains our proposed architecture. Section 3 explains the learning algorithms that were studied to learn consumer preferences. Section 4 studies the different service offering mechanisms. Section 5 contains the similarity metrics used in the experiments. The details of the developed system are analyzed in Section 6. Section 7 provides our experimental setup, test cases, and results. Finally, Section 8 discusses and compares our work with other related work. 2. ARCHITECTURE Our main components are consumer and producer agents, which communicate with each other to perform content-oriented negotiation. Figure 1 depicts our architecture. The consumer agent represents the customer and hence has access to the preferences of the customer. The consumer agent generates requests in accordance with these preferences and negotiates with the producer based on these preferences. Similarly, the producer agent has access to the producer's inventory and knows which wines are available or not. A shared ontology provides the necessary vocabulary and hence enables a common language for agents. This ontology describes the content of the service. Further, since an ontology can represent concepts, their properties and their relationships semantically, the agents can reason about the details of the service that is being negotiated. Since a service can be anything such as selling a car, reserving a hotel room, and so on, the architecture is independent of the ontology used. However, to make our discussion concrete, we use the well-known Wine ontology [19] with some modifications to illustrate our ideas and to test our system. The wine ontology describes different types of wine and includes features such as color, body, winery of the wine and so on. With this ontology, the service that is being negotiated between the consumer and the producer is that of selling wine. The data repository in Figure 1 is used solely by the producer agent and holds the inventory information of the producer.
The data repository includes information on the products the producer owns, the number of the products and the ratings of those products. Ratings indicate the popularity of the products among customers. These are used to decide which product will be offered when there exists more than one product having the same similarity to the request of the consumer agent. The negotiation takes place in a turn-taking fashion, where the consumer agent starts the negotiation with a particular service request. The request is composed of significant features of the service. In the wine example, these features include color, winery and so on. This is the particular wine that the customer is interested in purchasing. If the producer has the requested wine in its inventory, the producer offers the wine and the negotiation ends. Otherwise, the producer offers an alternative wine from the inventory. When the consumer receives a counter offer from the producer, it will evaluate it. If it is acceptable, then the negotiation will end. Otherwise, the customer will generate a new request or stick to the previous request. This process will continue until some service is accepted by the consumer agent or all possible offers are put forward to the consumer by the producer. One of the crucial challenges of content-oriented negotiation is the automatic generation of counter offers by the service producer. When the producer constructs its offer, it should consider three important things: the current request, the consumer preferences and the producer's available services.
Figure 1: Proposed Negotiation Architecture
Both the consumer's current request and the producer's own available services are accessible by the producer. However, the consumer's preferences in most cases will not be available. Hence, the producer will have to understand the needs of the consumer from their interactions and generate a counter offer that is likely to be accepted by the consumer. This challenge can be studied in three stages: • Preference Learning: How can the producers learn about each customer's preferences based on requests and counter offers? (Section 3) • Service Offering: How can the producers revise their offers based on the consumer's preferences that they have learned so far? (Section 4) • Similarity Estimation: How can the producer agent estimate similarity between the request and available services? (Section 5) 3. PREFERENCE LEARNING The requests of the consumer and the counter offers of the producer are represented as vectors, where each element in the vector corresponds to the value of a feature. The requests of the consumers represent individual wine products whereas their preferences are constraints over service features. For example, a consumer may have a preference for red wine. This means that the consumer is willing to accept any wine offered by the producers as long as the color is red. Accordingly, the consumer generates a request where the color feature is set to red and the other features are set to arbitrary values, e.g. (Medium, Strong, Red). At the beginning of the negotiation, the producer agent does not know the consumer's preferences but will need to learn them using information obtained from the dialogues between the producer and the consumer. The preferences denote the relative importance of the features of the services demanded by the consumer agents. For instance, the color of the wine may be important, so the consumer insists on buying the wine whose color is red and rejects all the offers involving wine whose color is white or rose. On the contrary, the winery may not be as important as the color for this customer, so the consumer may have a tendency to accept wines from any winery as long as the color is red.
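A minimal sketch of the turn-taking protocol just described is given below. The inventory, the consumer's private acceptance rule, and the producer's offer ordering are all invented placeholders; in the real system the producer ranks candidate offers by learned similarity (Sections 3-5) rather than iterating in list order. The sketch also marks where requests become positive examples and rejected counter-offers become negative ones.

```python
# Illustrative sketch of the request / counter-offer loop.
def negotiate(consumer_accepts, first_request, inventory, max_rounds=10):
    """Run request/counter-offer rounds; return the accepted service or None."""
    positives = [first_request]            # requests feed the learner as positives
    negatives = []                         # rejected counter-offers as negatives
    if first_request in inventory:
        return first_request               # exact match: negotiation ends at once
    for offer in inventory[:max_rounds]:   # placeholder order; real system ranks by similarity
        if consumer_accepts(offer):
            return offer
        negatives.append(offer)            # rejected counter-offer = negative example
    return None                            # all offers exhausted, no agreement

inventory = [("Light", "Strong", "White"), ("Full", "Moderate", "Red")]
accepts = lambda s: s[2] == "Red"          # private preference: any red wine
print(negotiate(accepts, ("Medium", "Strong", "Red"), inventory))
# -> ('Full', 'Moderate', 'Red')
```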
Table 1: How DCEA works
Type | Sample | The most general set | The most specific set
+ | (Full, Strong, White) | {(?, ?, ?)} | {(Full, Strong, White)}
− | (Full, Delicate, Rose) | {{(?-Full), ?, ?}, {?, (?-Delicate), ?}, {?, ?, (?-Rose)}} | {(Full, Strong, White)}
+ | (Medium, Moderate, Red) | {{(?-Full), ?, ?}, {?, (?-Delicate), ?}, {?, ?, (?-Rose)}} | {{(Full, Strong, White)}, {(Medium, Moderate, Red)}}
To tackle this problem, we propose to use incremental learning algorithms [6]. This is necessary since no training data is available before the interactions start. We particularly investigate two approaches. The first one is inductive learning. This technique is applied to learn the preferences as concepts. We elaborate on the Candidate Elimination Algorithm (CEA) for Version Space [10]. CEA is known to perform poorly if the information to be learned is disjunctive. Interestingly, most of the time consumer preferences are disjunctive. Say we are considering an agent that is buying wine. The consumer may prefer red wine or rose wine but not white wine. To use CEA with such preferences, a solid modification is necessary. The second approach is decision trees. Decision trees can learn from examples easily and classify new instances as positive or negative. A well-known incremental decision tree is ID5R [18]. However, ID5R is known to suffer from high computational complexity. For this reason, we instead use the ID3 algorithm [13] and iteratively build decision trees to simulate incremental learning. 3.1 CEA CEA [10] is one of the inductive learning algorithms that learns concepts from observed examples. The algorithm maintains two sets to model the concept to be learned. The first set is the most general set G. G contains hypotheses about all the possible values that the concept may obtain. As the name suggests, it is a generalization and contains all possible values unless the values have been identified not to represent the concept. The second set is the most specific set S. S contains only hypotheses that are known to identify the concept that is being learned. At the beginning of the algorithm, G is initialized to cover all possible concepts while S is initialized to be empty. During the interactions, each request of the consumer can be considered as a positive example and each counter offer generated by the producer and rejected by the consumer agent can be thought of as a negative example. At each interaction between the producer and the consumer, both G and S are modified. The negative samples enforce the specialization of some hypotheses so that G does not cover any hypothesis accepting the negative samples as positive. When a positive sample comes, the most specific set S should be generalized in order to cover the new training instance. As a result, the most general hypotheses and the most specific hypotheses cover all positive training samples but do not cover any negative ones. Incrementally, G specializes and S generalizes until G and S are equal to each other. When these sets are equal, the algorithm converges by means of reaching the target concept. 3.2 Disjunctive CEA Unfortunately, CEA is primarily targeted at conjunctive concepts.
On the other hand, we need to learn disjunctive concepts in the negotiation of a service since the consumer may have several alternative wishes. There are several studies on learning disjunctive concepts via Version Space. Some of these approaches use multiple version spaces. For instance, Hong et al. maintain several version spaces by split and merge operations [7]. To be able to learn disjunctive concepts, they create new version spaces by examining the consistency between G and S. We deal with CEA's lack of support for disjunctive concepts by extending our hypothesis language to include disjunctive hypotheses in addition to conjunctives and negation. Each attribute of the hypothesis has two parts: an inclusive list, which holds the list of valid values for that attribute, and an exclusive list, which is the list of values that cannot be taken for that feature. EXAMPLE 1. Assume that the most specific set is {(Light, Delicate, Red)} and a positive example, (Light, Delicate, White), comes. The original CEA will generalize this as (Light, Delicate, ?), meaning the color can take any value. However, in fact, we only know that the color can be red or white. In DCEA, we generalize it as {(Light, Delicate, [White, Red])}. Only when all the values exist in the list will they be replaced by ?. In other words, we let the algorithm generalize more slowly than before. We modify the CEA algorithm to deal with this change. The modified algorithm, DCEA, is given as Algorithm 1. Note that compared to the previous studies of disjunctive versions, our approach uses only a single version space rather than multiple version spaces. The initialization phase is the same as in the original algorithm (lines 1, 2). If any positive sample comes, we add the sample to the specific set as before (line 4). However, we do not eliminate the hypotheses in G that do not cover this sample, since G now contains a disjunction of many hypotheses, some of which will be conflicting with each other. Removing a specific hypothesis from G would result in a loss of information, since other hypotheses are not guaranteed to cover it. After some time, some hypotheses in S can be merged to construct one hypothesis (lines 6, 7). When a negative sample comes, we do not change S as before. We only modify the most general hypotheses so as not to cover this negative sample (lines 11-15). Different from the original CEA, we try to specialize G minimally. The algorithm removes the hypothesis covering the negative sample (line 13). Then, we generate as many new hypotheses as the number of attributes by using the removed hypothesis. For each attribute in the negative sample, we add one of them at a time to the exclusive list of the removed hypothesis. Thus, all possible hypotheses that do not cover the negative sample are generated (line 14). Note that the exclusive list contains the values that the attribute cannot take. For example, consider the color attribute. If a hypothesis includes red in its exclusive list and ? in its inclusive list, this means that color may take any value except red.
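Before the full pseudocode in Algorithm 1 below, a minimal sketch of this extended hypothesis language and its covering test may help. The encoding (a pair of an inclusive value set and an exclusive value set per attribute) is our illustrative choice, not code from the original system.

```python
# Illustrative sketch of DCEA's hypothesis language: inclusive/exclusive lists.
ANY = "?"

def attr_covers(inclusive, exclusive, value):
    """One attribute covers a value if it is allowed and not explicitly excluded."""
    if value in exclusive:
        return False
    return inclusive == ANY or value in inclusive

def hypothesis_covers(hypothesis, sample):
    """A hypothesis covers a sample if every attribute does."""
    return all(attr_covers(inc, exc, v)
               for (inc, exc), v in zip(hypothesis, sample))

# {?, ?, (?-Rose)}: any body, any flavor, any color except Rose
h = ((ANY, set()), (ANY, set()), (ANY, {"Rose"}))
print(hypothesis_covers(h, ("Full", "Strong", "White")))   # True
print(hypothesis_covers(h, ("Full", "Delicate", "Rose")))  # False

# {(Light, Delicate, [White, Red])}: the slowly generalized inclusive list
g = ((["Light"], set()), (["Delicate"], set()), (["White", "Red"], set()))
print(hypothesis_covers(g, ("Light", "Delicate", "Red")))  # True
```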
Algorithm 1 Disjunctive Candidate Elimination Algorithm
1: G ← the set of maximally general hypotheses in H
2: S ← the set of maximally specific hypotheses in H
3: For each training example, d
4: if d is a positive example then
5: Add d to S
6: if s in S can be combined with d to make one element then
7: Combine s and d into sd {sd is the rule that covers s and d}
8: end if
9: end if
10: if d is a negative example then
11: For each hypothesis g in G that covers d:
12: * Assume: g = (x1, x2, ..., xn) and d = (d1, d2, ..., dn)
13: - Remove g from G
14: - Add hypotheses g1, g2, ..., gn, where g1 = (x1-d1, x2, ..., xn), g2 = (x1, x2-d2, ..., xn), ..., and gn = (x1, x2, ..., xn-dn)
15: - Remove from G any hypothesis that is less general than another hypothesis in G
16: end if
EXAMPLE 2. Table 1 illustrates the first three interactions and the workings of DCEA. The most general set and the most specific set show the contents of G and S after each sample comes in. After the first positive sample, S is generalized to also cover the instance. The second sample is negative. Thus, we replace (?, ?, ?) by three disjunctive hypotheses, each hypothesis being minimally specialized. In this process, at each time one attribute value of the negative sample is applied to the hypothesis in the general set. The third sample is positive and generalizes S even more. Note that in Table 1, we do not eliminate {(?-Full), ?, ?} from the general set while having a positive sample such as (Full, Strong, White). This stems from the possibility of using this rule in the generation of other hypotheses. For instance, if the example continues with a negative sample (Full, Strong, Red), we can specialize the previous rule as {(?-Full), ?, (?-Red)}. By Algorithm 1, we do not miss any information. 3.3 ID3 ID3 [13] is an algorithm that constructs decision trees in a top-down fashion from the observed examples represented in a vector with attribute-value pairs. Applying this algorithm to our system with the intention of learning the consumer's preferences is appropriate since this algorithm also supports learning disjunctive concepts in addition to conjunctive concepts. The ID3 algorithm is used in the learning process with the purpose of classifying offers. There are two classes: positive and negative. Positive means that the service description will possibly be accepted by the consumer agent whereas negative implies that it will potentially be rejected by the consumer. The consumer's requests are considered as positive training examples and all rejected counter-offers are treated as negative ones. The decision tree has two types of nodes: leaf nodes, in which the class labels of the instances are held, and non-leaf nodes, in which test attributes are held. The test attribute in a non-leaf node is one of the attributes making up the service description. For instance, body, flavor, color and so on are potential test attributes for the wine service. When we want to find whether a given service description is acceptable, we start searching from the root node by examining the value of the test attributes until reaching a leaf node. The problem with this algorithm is that it is not an incremental algorithm, which means all the training examples should exist before learning. To overcome this problem, the system keeps the consumer's requests throughout the negotiation interaction as positive examples and all counter-offers rejected by the consumer as negative examples. After each incoming request, the decision tree is rebuilt. Without doubt, there is a drawback to reconstruction, namely the additional processing load. However, in practice we have found ID3 to be fast and the reconstruction cost to be negligible.
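The rebuild-per-interaction scheme just described can be sketched as follows. The ID3 routine below is the textbook algorithm (entropy-based information gain), not the authors' implementation, and the wine examples are placeholders; the point is the observe wrapper, which stores every new example and reconstructs the tree from scratch.

```python
# Illustrative sketch: textbook ID3 plus rebuild-after-each-example wrapping.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def build_id3(rows, labels, attrs):
    if len(set(labels)) == 1:
        return labels[0]                              # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]   # majority leaf
    def gain(a):                                      # information gain of splitting on a
        parts = {}
        for row, lab in zip(rows, labels):
            parts.setdefault(row[a], []).append(lab)
        rem = sum(len(p) / len(labels) * entropy(p) for p in parts.values())
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    tree = {"attr": best, "children": {}}
    for value in set(r[best] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        srows, slabels = zip(*sub)
        tree["children"][value] = build_id3(list(srows), list(slabels),
                                            [a for a in attrs if a != best])
    return tree

def classify(tree, row, default="+"):
    while isinstance(tree, dict):
        tree = tree["children"].get(row[tree["attr"]], default)
    return tree

positives, negatives = [], []
def observe(example, positive):
    """Store the new example and rebuild the tree (cheap in practice, per the text)."""
    (positives if positive else negatives).append(example)
    rows = positives + negatives
    labels = ["+"] * len(positives) + ["-"] * len(negatives)
    return build_id3(rows, labels, list(range(len(example))))

observe(("Medium", "Strong", "Red"), positive=True)            # consumer request
tree = observe(("Light", "Strong", "White"), positive=False)   # rejected offer
print(classify(tree, ("Full", "Moderate", "Red")))             # -> '+'
```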
4. SERVICE OFFERING After learning the consumer's preferences, the producer needs to make a counter offer that is compatible with the consumer's preferences. 4.1 Service Offering via CEA and DCEA To generate the best offer, the producer agent uses its service ontology and the CEA algorithm. The service offering mechanism is the same for both the original CEA and DCEA, but as explained before, their methods for updating G and S are different. When the producer receives a request from the consumer, the learning set of the producer is trained with this request as a positive sample. The learning components, the most specific set S and the most general set G, are actively used in offering a service. The most general set, G, is used by the producer in order to avoid offering services that will be rejected by the consumer agent. In other words, it filters the undesired services out of the service set, since G contains hypotheses that are consistent with the requests of the consumer. The most specific set, S, is used in order to find the best offer, which is similar to the consumer's preferences. Since the most specific set S holds the previous requests and the current request, estimating the similarity between this set and every service in the service list is very convenient for finding the best offer from the service list. When the consumer starts the interaction with the producer agent, the producer agent loads all related services into the service list object. This list constitutes the provider's inventory of services. Upon receiving a request, if the producer can offer an exactly matching service, then it does so. For example, for a wine this corresponds to selling a wine that matches the specified features of the consumer's request identically. When the producer cannot offer the service as requested, it tries to find the service that is most similar to the services that have been requested by the consumer during the negotiation. To do this, the producer has to compute the similarity between the services it can offer and the services that have been requested (in S). We compute the similarities in various ways as will be explained in Section 5. After the similarity of the available services with the current S is calculated, there may be more than one service with the maximum similarity. The producer agent can break the tie in a number of ways. Here, we have associated a rating value with each service and the producer prefers the higher rated service to others. 4.2 Service Offering via ID3 If the producer learns the consumer's preferences with ID3, a similar mechanism is applied, with two differences. First, since ID3 does not maintain G, the list of unaccepted services that are classified as negative is removed from the service list. Second, the similarities of possible services are not measured with respect to S, but instead with respect to all previously made requests. 4.3 Alternative Service Offering Mechanisms In addition to these three service offering mechanisms (Service Offering with CEA, Service Offering with DCEA, and Service Offering with ID3), we include two other mechanisms: • Random Service Offering (RO): The producer generates a counter offer randomly from the available service list, without considering the consumer's preferences. • Service Offering considering only the current request (SCR): The producer selects a counter offer according to the similarity to the consumer's current request but does not consider previous requests.
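A compact sketch of the offer-selection logic of Sections 4.1-4.2 follows, with a stand-in similarity function. The names choose_offer and toy_similarity, the inventory, and the ratings are all illustrative; the real system plugs in the metrics of Section 5 and the G-based filtering described above.

```python
# Illustrative sketch: exact match first, else most similar service, ties by rating.
def choose_offer(request, S, inventory, ratings, similarity):
    """Return an exact match if available, else the most similar (highest-rated) service."""
    if request in inventory:
        return request
    return max(inventory, key=lambda svc: (similarity(svc, S), ratings.get(svc, 0)))

def toy_similarity(service, S):
    """Stand-in metric: mean fraction of exactly matching features over requests in S."""
    match = lambda req: sum(a == b for a, b in zip(service, req)) / len(service)
    return sum(match(req) for req in S) / len(S)

S = [("Medium", "Strong", "Red")]                    # requests seen so far
inventory = [("Light", "Strong", "White"), ("Full", "Strong", "Red")]
ratings = {inventory[0]: 3, inventory[1]: 5}
print(choose_offer(("Medium", "Strong", "Red"), S, inventory, ratings, toy_similarity))
# -> ('Full', 'Strong', 'Red'), the closest service to the requests in S
```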
5. SIMILARITY ESTIMATION Similarity can be estimated with a similarity metric that takes two entries and returns how similar they are. There are several similarity metrics used in case-based reasoning systems, such as the weighted sum of Euclidean distance, Hamming distance and so on [12]. The similarity metric affects the performance of the system when deciding which service is the closest to the consumer's request. We first analyze some existing metrics and then propose a new semantic similarity metric named RP Similarity. 5.1 Tversky's Similarity Metric Tversky's similarity metric compares two vectors in terms of the number of exactly matching features [17]. In Equation (1), common represents the number of matched attributes whereas different represents the number of different attributes. Our current assumption is that $\alpha$ and $\beta$ are equal to each other. $SM_{pq} = \frac{\alpha(common)}{\alpha(common) + \beta(different)}$ (1). Here, when two features are compared, we assign zero for dissimilarity and one for similarity, omitting the semantic closeness among the feature values. Tversky's similarity metric is designed to compare two feature vectors. In our system, whereas the services that can be offered by the producer are each a feature vector, the most specific set S is not a feature vector. S consists of hypotheses of feature vectors. Therefore, we estimate the similarity of each hypothesis inside the most specific set S and then take the average of the similarities. EXAMPLE 3. Assume that S contains the following two hypotheses: {{Light, Moderate, (Red, White)}, {Full, Strong, Rose}}. Take service s as (Light, Strong, Rose). Then the similarity of the first one is equal to 1/3 and that of the second one is equal to 2/3 in accordance with Equation (1). Normally, we would take the average and obtain (1/3 + 2/3)/2, i.e., 1/2. However, the first hypothesis involves the effect of two requests and the second hypothesis involves only one request. As a result, we expect the effect of the first hypothesis to be greater than that of the second. Therefore, we calculate the average similarity by considering the number of samples that the hypotheses cover. Let $c_h$ denote the number of samples that hypothesis $h$ covers and $SM(h, service)$ denote the similarity of hypothesis $h$ with the given service. We compute the similarity of each hypothesis with the given service and weight them with the number of samples they cover. We find the similarity by dividing the weighted sum of the similarities of all hypotheses in S with the service by the number of all samples that are covered in S: $AVG\text{-}SM(service, S) = \frac{\sum_{h=1}^{|S|} (c_h \cdot SM(h, service))}{\sum_{h=1}^{|S|} c_h}$ (2).
Figure 2: Sample taxonomy for similarity estimation
EXAMPLE 4. For the above example, the similarity of (Light, Strong, Rose) with the specific set is (2 × 1/3 + 2/3)/3, i.e., 4/9. The possible number of samples that a hypothesis covers can be estimated by multiplying the cardinalities of its attributes. For example, the cardinality of the first attribute is two and that of the others is one for a hypothesis such as {Light, Moderate, (Red, White)}. When we multiply them, we obtain two (2 × 1 × 1 = 2).
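Equations (1) and (2) can be sketched directly; the encoding of hypotheses as tuples of value sets is our illustrative choice, and the final line reproduces the 4/9 result of Example 4 as a sanity check.

```python
# Illustrative sketch of Equation (1) (Tversky, alpha = beta = 1) and Equation (2).
def tversky(u, v, alpha=1.0, beta=1.0):
    common = sum(a == b for a, b in zip(u, v))
    different = len(u) - common
    return (alpha * common) / (alpha * common + beta * different)

def hyp_similarity(hyp, service):
    """Tversky-style similarity against a disjunctive hypothesis: set membership per feature."""
    common = sum(value in allowed for allowed, value in zip(hyp, service))
    return common / len(service)

def coverage(hyp):
    """Samples a hypothesis covers = product of its attribute cardinalities."""
    n = 1
    for allowed in hyp:
        n *= len(allowed)
    return n

def avg_sm(service, S):
    """Equation (2): coverage-weighted mean similarity of `service` against S."""
    weights = [coverage(h) for h in S]
    sims = [hyp_similarity(h, service) for h in S]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

print(tversky(("Light", "Strong", "Rose"), ("Full", "Strong", "Rose")))  # 2/3
S = [({"Light"}, {"Moderate"}, {"Red", "White"}),   # covers 2 samples
     ({"Full"}, {"Strong"}, {"Rose"})]              # covers 1 sample
print(avg_sm(("Light", "Strong", "Rose"), S))        # 0.444... = 4/9, as in Example 4
```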
5.2 Lin's Similarity Metric A taxonomy can be used when estimating the semantic similarity between two concepts. Estimating semantic similarity in an IS-A taxonomy can be done by calculating the distance between the nodes related to the compared concepts. The links among the nodes can be considered as distances. Then, the length of the path between the nodes indicates how closely similar the concepts are. An alternative estimation, using information content rather than the edge counting method, was proposed by Lin [8]. Equation (3) [8] shows Lin's similarity, where $c_1$ and $c_2$ are the compared concepts and $c_0$ is the most specific concept that subsumes both of them. Besides, $P(C)$ represents the probability that an arbitrarily selected object belongs to concept $C$: $Similarity(c_1, c_2) = \frac{2 \times \log P(c_0)}{\log P(c_1) + \log P(c_2)}$ (3). 5.3 Wu & Palmer's Similarity Metric Different from Lin, Wu and Palmer use the distance between the nodes in an IS-A taxonomy [20]. The semantic similarity is represented with Equation (4) [20]. Here, the similarity between $c_1$ and $c_2$ is estimated, and $c_0$ is the most specific concept subsuming these classes. $N_1$ is the number of edges between $c_1$ and $c_0$. $N_2$ is the number of edges between $c_2$ and $c_0$. $N_0$ is the number of IS-A links of $c_0$ from the root of the taxonomy: $Sim_{Wu\&Palmer}(c_1, c_2) = \frac{2 \times N_0}{N_1 + N_2 + 2 \times N_0}$ (4). 5.4 RP Semantic Metric We propose to estimate the relative distance in a taxonomy between two concepts using the following intuitions. We use Figure 2 to illustrate these intuitions. • Parent versus grandparent: The parent of a node is more similar to the node than its grandparents are. Generalization of a concept reasonably results in moving further away from that concept. The more general concepts are, the less similar they are. For example, AnyWineColor is the parent of ReddishColor and ReddishColor is the parent of Red. Then, we expect the similarity between ReddishColor and Red to be higher than the similarity between AnyWineColor and Red. • Parent versus sibling: A node has higher similarity to its parent than to its sibling. For instance, Red and Rose are children of ReddishColor. In this case, we expect the similarity between Red and ReddishColor to be higher than that of Red and Rose. • Sibling versus grandparent: A node is more similar to its sibling than to its grandparent. To illustrate, AnyWineColor is the grandparent of Red, and Red and Rose are siblings. Therefore, we expect that Red and Rose are more similar than AnyWineColor and Red. Since a taxonomy is represented as a tree, the tree can be traversed from the first concept being compared to the second. At the starting node, related to the first concept, the similarity value is equal to one. This value is diminished by a constant factor at each node visited along the path to the node containing the second concept. The shorter the path between the concepts, the higher the similarity between the nodes.
Algorithm 2 Estimate-RP-Similarity(c1, c2)
Require: The constants should satisfy m > n > m^2, where m, n ∈ R[0, 1]
1: Similarity ← 1
2: if c1 is equal to c2 then
3: Return Similarity
4: end if
5: commonParent ← findCommonParent(c1, c2) {commonParent is the most specific concept that covers both c1 and c2}
6: N1 ← findDistance(commonParent, c1)
7: N2 ← findDistance(commonParent, c2) {N1 and N2 are the numbers of links between the concepts and the common parent}
8: if (commonParent == c1) or (commonParent == c2) then
9: Similarity ← Similarity ∗ m^(N1+N2)
10: else
11: Similarity ← Similarity ∗ n ∗ m^(N1+N2−2)
12: end if
13: Return Similarity
The relative distance between nodes c1 and c2 is estimated in the following way. Starting from c1, the tree is traversed to reach c2. At each hop, the similarity decreases since the concepts are getting farther away from each other. However, based on our intuitions, not all hops decrease the similarity equally. Let m represent the factor for hopping from a child to a parent and n represent the factor for hopping from a sibling to another sibling. Since hopping from a node to its grandparent counts as two parent hops, the discount factor of moving from a node to its grandparent is m^2. According to the above intuitions, our constants should be in the form m > n > m^2, where the values of m and n should be between zero and one. Algorithm 2 shows the distance calculation. According to the algorithm, the similarity is first initialized with the value of one (line 1). If the concepts are equal to each other, then the similarity is one (lines 2-4). Otherwise, we compute the common parent of the two nodes and the distance of each concept to the common parent without considering siblings (lines 5-7). If one of the concepts is equal to the common parent, then there is no sibling relation between the concepts. For each level, we multiply the similarity by m and do not consider the sibling factor in the similarity estimation. As a result, we decrease the similarity at each level at the rate of m (line 9). Otherwise, there has to be a sibling relation. This means that we have to consider the effect of n when measuring similarity. Recall that we have counted N1+N2 edges between the concepts. Since there is a sibling relation, two of these edges constitute the sibling relation. Hence, when calculating the effect of the parent relation, we use N1+N2−2 edges (line 11). Some similarity estimations related to the taxonomy in Figure 2 are given in Table 2. In this example, m is taken as 2/3 and n is taken as 4/7.
Table 2: Sample similarity estimation over the sample taxonomy
Similarity(ReddishColor, Rose) = 1 × (2/3) = 0.6666667
Similarity(Red, Rose) = 1 × (4/7) = 0.5714286
Similarity(AnyWineColor, Rose) = 1 × (2/3)^2 = 0.44444445
Similarity(White, Rose) = 1 × (2/3) × (4/7) = 0.3809524
For all semantic similarity metrics in our architecture, the taxonomy for features is held in the shared ontology. In order to evaluate the similarity of a feature vector, we first estimate the similarity for each feature one by one and then take the average of these similarities. The result is then the average semantic similarity of the entire feature vector.
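A sketch of Algorithm 2 over a hard-coded fragment of the Figure 2 taxonomy follows, with m = 2/3 and n = 4/7 as in Table 2. In the real system the parent relation would be read from the shared ontology rather than a literal dictionary; the printed values reproduce the four rows of Table 2.

```python
# Illustrative sketch of Algorithm 2 (RP similarity) over an explicit parent map.
PARENT = {"ReddishColor": "AnyWineColor", "White": "AnyWineColor",
          "Red": "ReddishColor", "Rose": "ReddishColor"}

def ancestors(c):
    """Return [c, parent(c), grandparent(c), ...] up to the taxonomy root."""
    chain = [c]
    while c in PARENT:
        c = PARENT[c]
        chain.append(c)
    return chain

def rp_similarity(c1, c2, m=2/3, n=4/7):
    assert m > n > m * m, "constants must satisfy m > n > m^2"
    if c1 == c2:
        return 1.0
    a1, a2 = ancestors(c1), ancestors(c2)
    common = next(c for c in a1 if c in a2)      # most specific common parent
    n1, n2 = a1.index(common), a2.index(common)  # edge counts to the common parent
    if common in (c1, c2):                       # pure ancestor relation (line 9)
        return m ** (n1 + n2)
    return n * m ** (n1 + n2 - 2)                # one sibling hop + parent hops (line 11)

print(rp_similarity("ReddishColor", "Rose"))   # 0.666...  (parent)
print(rp_similarity("Red", "Rose"))            # 0.5714... (siblings)
print(rp_similarity("AnyWineColor", "Rose"))   # 0.444...  (grandparent)
print(rp_similarity("White", "Rose"))          # 0.3809... (sibling's child)
```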
6. DEVELOPED SYSTEM
We have implemented our architecture in Java. To ease testing of the system, the consumer agent has a user interface that allows us to enter various requests. The producer agent is fully automated, and the learning and service-offering operations work as explained before. In this section, we explain the implementation details of the developed system.

We use OWL [11] as our ontology language and JENA as our ontology reasoner. The shared ontology is a modified version of the Wine Ontology [19]. It includes the description of wine as a concept and of different types of wine. All participants of the negotiation use this ontology to understand each other. According to the ontology, seven properties make up the wine concept. The consumer agent and the producer agent obtain the possible values for these properties by querying the ontology; thus, all possible values for the components of the wine concept, such as color, body, sugar, and so on, can be reached by both agents. A variety of wine types are also described in this ontology, such as Burgundy, Chardonnay, CheninBlanc, and so on. Intuitively, any wine type described in the ontology also represents a wine concept, which allows us to consider instances of Chardonnay wine as instances of the Wine class.

In addition to the wine description, the hierarchical information of some features can be inferred from the ontology. For instance, we can represent the information that Europe Continent covers Western Country, that Western Country covers French Region, and that French Region covers territories such as Loire, Bordeaux, and so on. This hierarchical information is used in the estimation of semantic similarity. Here, some reasoning can be performed, such as: if a concept X covers Y and Y covers Z, then X covers Z. For example, Europe Continent covers Bordeaux.

For some features, such as body, flavor, and sugar, there is no hierarchical information, but their values are semantically leveled. When that is the case, we assign reasonable similarity values for these features. For example, the body can be light, medium, or strong; in this case, we assume that light is 0.66 similar to medium but only 0.33 similar to strong.

The WineStock ontology is the producer's inventory and describes a product class, WineProduct. This class is necessary for the producer to record the wines that it sells. The ontology contains the individuals of this class, and these individuals represent the available services that the producer owns. We have prepared two separate WineStock ontologies for testing: the first contains 19 available wine products, and the second contains 50.
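Concretely, the kind of ontology query the agents perform can be sketched with the classic Jena 2.x ontology API. This is a minimal sketch under stated assumptions: the file name wine.owl, the namespace, and the class names WineColor and EuropeContinent are hypothetical placeholders, not the identifiers actually used in our ontology.

```java
import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class OntologyQuery {
    // Hypothetical namespace and file name for the shared wine ontology.
    static final String NS = "http://example.org/wine#";

    public static void main(String[] args) {
        // A model with a rule reasoner, so subsumption ("covers") is inferred transitively.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
        model.read("file:wine.owl");

        // Enumerate the possible values of one wine property, e.g. its color.
        OntClass color = model.getOntClass(NS + "WineColor");
        ExtendedIterator<Individual> it = model.listIndividuals(color);
        while (it.hasNext()) {
            System.out.println(it.next().getLocalName()); // e.g. Red, Rose, White
        }

        // With the reasoner, "X covers Y, Y covers Z => X covers Z" falls out of
        // subclass inference: all regions under EuropeContinent, direct or not.
        OntClass europe = model.getOntClass(NS + "EuropeContinent");
        ExtendedIterator<OntClass> regions = europe.listSubClasses(false); // false = transitive
        while (regions.hasNext()) {
            System.out.println(regions.next().getLocalName()); // e.g. FrenchRegion, Bordeaux
        }
    }
}
```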
7. PERFORMANCE EVALUATION
We evaluate the performance of the proposed systems with respect to the learning technique they use, DCEA or ID3, by comparing them with CEA, RO (random offering), and SCR (offering based on the current request only). We apply a variety of scenarios on this dataset in order to see the performance differences. Each test scenario contains a list of preferences for the user and the number of matching products in the product list. Table 3 shows these preferences and the availability of matching products in the inventory for the first five scenarios. Note that these preferences are internal to the consumer, and the producer tries to learn them during negotiation.

Table 3: Availability of wines in different test scenarios
  ID | Preference of consumer      | Availability (out of 19)
  1  | Dry wine                    | 15
  2  | Red and dry wine            | 8
  3  | Red, dry and moderate wine  | 4
  4  | Red and strong wine         | 2
  5  | Red or rose, and strong     | 3

7.1 Comparison of Learning Algorithms
To compare the learning algorithms, we use the five scenarios in Table 3; here, we first use Tversky's similarity measure. With these test cases, we are interested in the number of iterations required for the producer to generate an acceptable offer for the consumer. Since the performance also depends on the initial request, we repeat our experiments with different initial requests: for each case, we run the algorithms five times with several variations of the initial request. In each experiment, we count the number of iterations that were needed to reach an agreement, and we take the average of these numbers in order to evaluate the systems fairly. As is customary, we test each algorithm with the same initial requests. Table 4 compares the approaches using the different learning algorithms.

Table 4: Comparison of learning algorithms in terms of average number of interactions
  Run               | DCEA | SCR | RO   | CEA             | ID3
  Scenario 1        | 1.2  | 1.4 | 1.2  | 1.2             | 1.2
  Scenario 2        | 1.4  | 1.4 | 2.6  | 1.4             | 1.4
  Scenario 3        | 1.4  | 1.8 | 4.4  | 1.4             | 1.4
  Scenario 4        | 2.2  | 2.8 | 9.6  | 1.8             | 2
  Scenario 5        | 2    | 2.6 | 7.6  | 1.75 + no offer | 1.8
  Avg. of all cases | 1.64 | 2   | 5.08 | 1.51 + no offer | 1.56

When a large part of the inventory is compatible with the customer's preferences, as in the first test case, the performance of all techniques is nearly the same (e.g., Scenario 1). As the number of compatible services drops, RO performs poorly, as expected. The second worst method is SCR, since it considers only the customer's most recent request and does not learn from previous requests. CEA gives the best results when it can generate an answer, but it cannot handle cases containing disjunctive preferences, such as the one in Scenario 5 (hence the "no offer" entries). ID3 and DCEA achieve the best results: their performance is comparable, and they can handle all cases, including Scenario 5.
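For illustration, the measurement loop just described can be sketched as follows; the Producer interface and its negotiate method are hypothetical stand-ins for our producer agent, added by us, not the system's actual API.

```java
import java.util.List;

/** Hypothetical harness mirroring the setup above: run each configuration with
 *  several initial requests and average the interaction counts to agreement. */
public class EvaluationHarness {

    /** Stand-in for a producer configured with one learning algorithm (DCEA, ID3, ...). */
    public interface Producer {
        /** Negotiates from the given initial request; returns the number of
         *  interactions needed to reach an agreement. */
        int negotiate(String[] initialRequest);
    }

    /** Average number of interactions over all initial-request variations. */
    public static double averageInteractions(Producer producer,
                                             List<String[]> initialRequests) {
        int total = 0;
        for (String[] request : initialRequests) {
            total += producer.negotiate(request);
        }
        return (double) total / initialRequests.size();
    }
}
```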
7.2 Comparison of Similarity Metrics
To compare the similarity metrics explained in Section 5, we fix the learning algorithm to DCEA. In addition to the scenarios shown in Table 3, we add the following five new scenarios, which exercise the hierarchical information.
• The customer wants to buy wine whose winery is located in California and whose grape is a type of white grape. Moreover, the winery of the wine should not be expensive. There are only four products meeting these conditions.
• The customer wants to buy wine whose color is red or rose and whose grape type is red grape. In addition, the wine should be located in Europe. The sweetness degree should be dry or off-dry, the flavor delicate or moderate, and the body medium or light. Furthermore, the winery of the wine should be an expensive winery. There are two products meeting all these requirements.
• The customer wants to buy a moderate rose wine located around the French Region, whose winery is in the category Moderate Winery. There is only one product meeting these requirements.
• The customer wants to buy an expensive red wine located around the California Region, or a cheap white wine located around the Texas Region. There are five available products.
• The customer wants to buy a delicate white wine whose producer is in the category Expensive Winery. There are two available products.
The first seven scenarios are tested with the first dataset, which contains a total of 19 services; the last three scenarios are tested with the second dataset, which contains 50 services. Table 5 gives the performance evaluation in terms of the number of interactions needed to reach a consensus.

Table 5: Comparison of similarity metrics in terms of number of interactions
  Run               | Tversky | Lin  | Wu Palmer | RP
  Scenario 1        | 1.2     | 1.2  | 1         | 1
  Scenario 2        | 1.4     | 1.4  | 1.6       | 1.6
  Scenario 3        | 1.4     | 1.8  | 2         | 2
  Scenario 4        | 2.2     | 1    | 1.2       | 1.2
  Scenario 5        | 2       | 1.6  | 1.6       | 1.6
  Scenario 6        | 5       | 3.8  | 2.4       | 2.6
  Scenario 7        | 3.2     | 1.2  | 1         | 1
  Scenario 8        | 5.6     | 2    | 2         | 2.2
  Scenario 9        | 2.6     | 2.2  | 2.2       | 2.6
  Scenario 10       | 4.4     | 2    | 2         | 1.8
  Avg. of all cases | 2.9     | 1.82 | 1.7       | 1.76

Tversky's metric gives the worst results, since it does not consider semantic similarity. Lin's metric performs better than Tversky's but worse than the others. Wu & Palmer's metric and our RP similarity measure give nearly the same performance and are better than the other two. Overall, the results show that taking semantic closeness into account improves performance.

8. DISCUSSION
We review the recent literature in comparison to our work. Tama et al. [16] propose a new, ontology-based approach to negotiation. In their approach, the negotiation protocols used in e-commerce are modeled as ontologies, so the agents can carry out a negotiation protocol by using this shared ontology, without the protocol details being hard-coded into the agents. While Tama et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated. Further, we have built a system with which negotiation preferences can be learned.

Sadri et al. study negotiation in the context of resource allocation [14]. Agents have limited resources and need to request missing resources from other agents. As a solution, they propose a mechanism based on dialogue sequences among agents, which relies on an observe-think-act agent cycle. These dialogues include offering resources, exchanging resources, and offering alternative resources. Each agent in the system plans its actions to reach a goal state. Contrary to our approach, Sadri et al.'s study is not concerned with learning the preferences of other agents.

Brzostowski and Kowalczyk propose an approach for selecting an appropriate negotiation partner by investigating previous multi-attribute negotiations [1]. For this they use case-based reasoning. Their approach is probabilistic, since the behavior of the partners can change at each iteration. In our approach, we are interested in negotiating the content of the service; after the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price.

Fatima et al. study the factors that affect negotiation, such as preferences, deadlines, and price, since an agent developing a strategy against its opponent should consider all of them [5]. In their approach, the goal of the seller agent is to sell the service for the highest possible price, whereas the goal of the buyer agent is to buy the good for the lowest possible price; the time interval affects these agents differently. Compared to Fatima et al., our focus is different: while they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation.

Faratin et al. propose a multi-issue negotiation mechanism in which the service variables under negotiation, such as price, quality of the service, and so on, are traded off against each other (i.e., a higher price for earlier delivery) [4]. They generate a heuristic model for trade-offs that includes fuzzy similarity estimation and a hill-climbing exploration for possibly acceptable offers.
Although we address a similar problem, we learn the preferences of the customer with the help of inductive learning and generate counter-offers in accordance with these learned preferences. Faratin et al. use only the last offer made by the consumer when calculating the similarity for choosing a counter-offer; unlike them, we also take into account the previous requests of the consumer. Moreover, in their experiments, Faratin et al. assume that the weights of the service variables are fixed a priori, whereas we learn these preferences over time.

In our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from the subsumption hierarchy of relations. Further, by using relationships among features, the producer can discover new knowledge from existing knowledge. These are interesting directions that we will pursue.

9. REFERENCES
[1] J. Brzostowski and R. Kowalczyk. On possibilistic case-based reasoning for selecting partners for multi-attribute agent negotiation. In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 273-278, 2005.
[2] L. Busch and I. Horstman. A comment on issue-by-issue negotiations. Games and Economic Behavior, 19:144-148, 1997.
[3] J. K. Debenham. Managing e-market negotiation in context with a multiagent system. In Proceedings of the 21st International Conference on Knowledge Based Systems and Applied Artificial Intelligence (ES 2002), 2002.
[4] P. Faratin, C. Sierra, and N. R. Jennings. Using similarity criteria to make issue trade-offs in automated negotiations. Artificial Intelligence, 142:205-237, 2002.
[5] S. Fatima, M. Wooldridge, and N. Jennings. Optimal agents for multi-issue negotiation. In Proceedings of the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 129-136, 2003.
[6] C. Giraud-Carrier. A note on the utility of incremental learning. AI Communications, 13(4):215-223, 2000.
[7] T.-P. Hong and S.-S. Tseng. Splitting and merging version spaces to learn disjunctive concepts. IEEE Transactions on Knowledge and Data Engineering, 11(5):813-815, 1999.
[8] D. Lin. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, pages 296-304. Morgan Kaufmann, San Francisco, CA, 1998.
[9] P. Maes, R. H. Guttman, and A. G. Moukas. Agents that buy and sell. Communications of the ACM, 42(3):81-91, 1999.
[10] T. M. Mitchell. Machine Learning. McGraw Hill, NY, 1997.
[11] OWL: Web ontology language guide, 2003. http://www.w3.org/TR/2003/CR-owl-guide-20030818/.
[12] S. K. Pal and S. C. K. Shiu. Foundations of Soft Case-Based Reasoning. John Wiley & Sons, New Jersey, 2004.
[13] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[14] F. Sadri, F. Toni, and P. Torroni. Dialogues for negotiation: Agent varieties and dialogue sequences. In ATAL 2001, Revised Papers, volume 2333 of LNAI, pages 405-421. Springer-Verlag, 2002.
[15] M. P. Singh. Value-oriented electronic commerce. IEEE Internet Computing, 3(3):6-7, 1999.
[16] V. Tamma, S. Phelps, I. Dickinson, and M. Wooldridge. Ontologies for supporting negotiation in e-commerce. Engineering Applications of Artificial Intelligence, 18:223-236, 2005.
[17] A. Tversky. Features of similarity. Psychological Review, 84(4):327-352, 1977.
[18] P. E. Utgoff. Incremental induction of decision trees. Machine Learning, 4:161-186, 1989.
[19] Wine ontology, 2003. http://www.w3.org/TR/2003/CR-owl-guide-20030818/wine.rdf.
[20] Z. Wu and M. Palmer. Verb semantics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 133-138, 1994.
Learning Consumer Preferences Using Semantic Similarity ∗ Reyhan Aydo˘gan Pınar Yolum ABSTRACT In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics. 1. INTRODUCTION Current approaches to e-commerce treat service price as the primary construct for negotiation by assuming that the service content is fixed [9]. However, negotiation on price presupposes that other properties of the service have already been agreed upon. Nevertheless, many times the service provider may not be offering the exact requested service due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15]. However, most existing negotiation approaches assume that all features of a service are equally important and concentrate on the price [5, 2]. However, in reality not all features may be relevant and the relevance of a feature may vary from consumer to consumer. For instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluation of the service components with different weights can be useful. Some studies take these weights as a priori and uses the fixed weights [4]. On the other hand, mostly the producer does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer. Preference Learning: As an alternative, we propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time. We represent service requests as a vector of service features. We use an ontology in order to capture the relations between services and to construct the features for a given service. By using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation. The particular service we have used is a wine selling service. The wine seller learns the wine preferences of the customer to sell better targeted wines. The producer models the requests of the consumer and its counter offers to learn which features are more important for the consumer. 
Since no information is present before the interactions start, the learning algorithm has to be incremental so that it can be trained at run time and can revise itself with each new interaction. Service Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services. In other words, the question is how the producer uses the information that was learned from the dialogues to make the best offer to the consumer. For instance, assume that the producer has learned that the consumer wants to buy a red wine but the producer can only offer rose or white wine. What should the producer's offer contain; white wine or rose wine? If the producer has some domain knowledge about semantic similarity (e.g., knows that the red and rose wines are taste-wise more similar than white wine), then it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure similarity between available services and learned preferences. The rest of this paper is organized as follows: Section 2 explains our proposed architecture. Section 3 explains the learning algorithms that were studied to learn consumer preferences. Section 4 studies the different service offering mechanisms. Section 5 contains the similarity metrics used in the experiments. The details of the developed system is analyzed in Section 6. Section 7 provides our experimental setup, test cases, and results. Finally, Section 8 discusses and compares our work with other related work. 2. ARCHITECTURE Our main components are consumer and producer agents, which communicate with each other to perform content-oriented negotiation. Figure 1 depicts our architecture. The consumer agent represents the customer and hence has access to the preferences of the customer. The consumer agent generates requests in accordance with these preferences and negotiates with the producer based on these preferences. Similarly, the producer agent has access to the producer's inventory and knows which wines are available or not. A shared ontology provides the necessary vocabulary and hence enables a common language for agents. This ontology describes the content of the service. Further, since an ontology can represent concepts, their properties and their relationships semantically, the agents can reason the details of the service that is being negotiated. Since a service can be anything such as selling a car, reserving a hotel room, and so on, the architecture is independent of the ontology used. However, to make our discussion concrete, we use the well-known Wine ontology [19] with some modification to illustrate our ideas and to test our system. The wine ontology describes different types of wine and includes features such as color, body, winery of the wine and so on. With this ontology, the service that is being negotiated between the consumer and the producer is that of selling wine. The data repository in Figure 1 is used solely by the producer agent and holds the inventory information of the producer. The data repository includes information on the products the producer owns, the number of the products and ratings of those products. Ratings indicate the popularity of the products among customers. Those are used to decide which product will be offered when there exists more than one product having same similarity to the request of the consumer agent. 
The negotiation takes place in a turn-taking fashion, where the consumer agent starts the negotiation with a particular service request. The request is composed of significant features of the service. In the wine example, these features include color, winery and so on. This is the particular wine that the customer is interested in purchasing. If the producer has the requested wine in its inventory, the producer offers the wine and the negotiation ends. Otherwise, the producer offers an alternative wine from the inventory. When the consumer receives a counter offer from the producer, it will evaluate it. If it is acceptable, then the negotiation will end. Otherwise, the customer will generate a new request or stick to the previous request. This process will continue until some service is accepted by the consumer agent or all possible offers are put forward to the consumer by the producer. One of the crucial challenges of the content-oriented negotiation is the automatic generation of counter offers by the service producer. When the producer constructs its offer, it should consider Figure 1: Proposed Negotiation Architecture three important things: the current request, consumer preferences and the producer's available services. Both the consumer's current request and the producer's own available services are accessible by the producer. However, the consumer's preferences in most cases will not be available. Hence, the producer will have to understand the needs of the consumer from their interactions and generate a counter offer that is likely to be accepted by the consumer. This challenge can be studied in three stages: • Preference Learning: How can the producers learn about each customer's preferences based on requests and counter offers? (Section 3) • Service Offering: How can the producers revise their offers based on the consumer's preferences that they have learned so far? (Section 4) • Similarity Estimation: How can the producer agent estimate similarity between the request and available services? (Section 5) 3. PREFERENCE LEARNING The requests of the consumer and the counter offers of the producer are represented as vectors, where each element in the vector corresponds to the value of a feature. The requests of the consumers represent individual wine products whereas their preferences are constraints over service features. For example, a consumer may have preference for red wine. This means that the consumer is willing to accept any wine offered by the producers as long as the color is red. Accordingly, the consumer generates a request where the color feature is set to red and other features are set to arbitrary values, e.g. (Medium, Strong, Red). At the beginning of negotiation, the producer agent does not know the consumer's preferences but will need to learn them using information obtained from the dialogues between the producer and the consumer. The preferences denote the relative importance of the features of the services demanded by the consumer agents. For instance, the color of the wine may be important so the consumer insists on buying the wine whose color is red and rejects all 1302 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Table 1: How DCEA works the offers involving the wine whose color is white or rose. On the contrary, the winery may not be as important as the color for this customer, so the consumer may have a tendency to accept wines from any winery as long as the color is red. 
To tackle this problem, we propose to use incremental learning algorithms [6]. This is necessary since no training data is available before the interactions start. We particularly investigate two approaches. The first one is inductive learning. This technique is applied to learn the preferences as concepts. We elaborate on Candidate Elimination Algorithm (CEA) for Version Space [10]. CEA is known to perform poorly if the information to be learned is disjunctive. Interestingly, most of the time consumer preferences are disjunctive. Say, we are considering an agent that is buying wine. The consumer may prefer red wine or rose wine but not white wine. To use CEA with such preferences, a solid modification is necessary. The second approach is decision trees. Decision trees can learn from examples easily and classify new instances as positive or negative. A well-known incremental decision tree is ID5R [18]. However, ID5R is known to suffer from high computational complexity. For this reason, we instead use the ID3 algorithm [13] and iteratively build decision trees to simulate incremental learning. 3.1 CEA CEA [10] is one of the inductive learning algorithms that learns concepts from observed examples. The algorithm maintains two sets to model the concept to be learned. The first set is the most general set G. G contains hypotheses about all the possible values that the concept may obtain. As the name suggests, it is a generalization and contains all possible values unless the values have been identified not to represent the concept. The second set is the most specific set S. S contains only hypotheses that are known to identify the concept that is being learned. At the beginning of the algorithm, G is initialized to cover all possible concepts while S is initialized to be empty. During the interactions, each request of the consumer can be considered as a positive example and each counter offer generated by the producer and rejected by the consumer agent can be thought of as a negative example. At each interaction between the producer and the consumer, both G and S are modified. The negative samples enforce the specialization of some hypotheses so that G does not cover any hypothesis accepting the negative samples as positive. When a positive sample comes, the most specific set S should be generalized in order to cover the new training instance. As a result, the most general hypotheses and the most special hypotheses cover all positive training samples but do not cover any negative ones. Incrementally, G specializes and S generalizes until G and S are equal to each other. When these sets are equal, the algorithm converges by means of reaching the target concept. 3.2 Disjunctive CEA Unfortunately, CEA is primarily targeted for conjunctive concepts. On the other hand, we need to learn disjunctive concepts in the negotiation of a service since consumer may have several alternative wishes. There are several studies on learning disjunctive concepts via Version Space. Some of these approaches use multiple version space. For instance, Hong et al. maintain several version spaces by split and merge operation [7]. To be able to learn disjunctive concepts, they create new version spaces by examining the consistency between G and S. We deal with the problem of not supporting disjunctive concepts of CEA by extending our hypothesis language to include disjunctive hypothesis in addition to the conjunctives and negation. 
Each attribute of the hypothesis has two parts: inclusive list, which holds the list of valid values for that attribute and exclusive list, which is the list of values which cannot be taken for that feature. EXAMPLE 1. Assume that the most specific set is {(Light, Delicate, Red)} and a positive example, (Light, Delicate, White) comes. The original CEA will generalize this as (Light, Delicate,?) , meaning the color can take any value. However, in fact, we only know that the color can be red or white. In the DCEA, we generalize it as {(Light, Delicate, [White, Red])}. Only when all the values exist in the list, they will be replaced by? . In other words, we let the algorithm generalize more slowly than before. We modify the CEA algorithm to deal with this change. The modified algorithm, DCEA, is given as Algorithm 1. Note that compared to the previous studies of disjunctive versions, our approach uses only a single version space rather than multiple version space. The initialization phase is the same as the original algorithm (lines 1, 2). If any positive sample comes, we add the sample to the special set as before (line 4). However, we do not eliminate the hypotheses in G that do not cover this sample since G now contains a disjunction of many hypotheses, some of which will be conflicting with each other. Removing a specific hypothesis from G will result in loss of information, since other hypotheses are not guaranteed to cover it. After some time, some hypotheses in S can be merged and can construct one hypothesis (lines 6, 7). When a negative sample comes, we do not change S as before. We only modify the most general hypotheses not to cover this negative sample (lines 11--15). Different from the original CEA, we try to specialize the G minimally. The algorithm removes the hypothesis covering the negative sample (line 13). Then, we generate new hypotheses as the number of all possible attributes by using the removed hypothesis. For each attribute in the negative sample, we add one of them at each time to the exclusive list of the removed hypothesis. Thus, all possible hypotheses that do not cover the negative sample are generated (line 14). Note that, exclusive list contains the values that the attribute cannot take. For example, consider the color attribute. If a hypothesis includes red in its exclusive list and? in its inclusive list, this means that color may take any value except red. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1303 Algorithm 1 Disjunctive Candidate Elimination Algorithm 1: G +--the set of maximally general hypotheses in H 2: S +--the set of maximally specific hypotheses in H 3: For each training example, d 4: if d is a positive example then 5: Add d to S 6: if s in S can be combined with d to make one element then 7: Combine s and d into sd {sd is the rule covers s and d} 8: end if 9: end if 10: if d is a negative example then 11: For each hypothesis g in G does cover d 12: * Assume: g = (x1, x2,..., xn) and d = (d1, d2,..., dn) 13:--Remove g from G 14:--Add hypotheses g1, g2, gn where g1 = (x1-d1, x2,..., xn), g2 = (x1, x2-d2,..., xn),..., and gn = (x1, x2,..., xn-dn) 15:--Remove from G any hypothesis that is less general than another hypothesis in G 16: end if EXAMPLE 2. Table 1 illustrates the first three interactions and the workings of DCEA. The most general set and the most specific set show the contents of G and S after the sample comes in. After the first positive sample, S is generalized to also cover the instance. 
The second sample is negative. Thus, we replace (? ,? ,?) by three disjunctive hypotheses; each hypothesis being minimally specialized. In this process, at each time one attribute value of negative sample is applied to the hypothesis in the general set. The third sample is positive and generalizes S even more. Note that in Table 1, we do not eliminate {(? - Full),? ,?} from the general set while having a positive sample such as (Full, Strong, White). This stems from the possibility of using this rule in the generation of other hypotheses. For instance, if the example continues with a negative sample (Full, Strong, Red), we can specialize the previous rule such as {(? - Full),? , (? - Red)}. By Algorithm 1, we do not miss any information. 3.3 ID3 ID3 [13] is an algorithm that constructs decision trees in a topdown fashion from the observed examples represented in a vector with attribute-value pairs. Applying this algorithm to our system with the intention of learning the consumer's preferences is appropriate since this algorithm also supports learning disjunctive concepts in addition to conjunctive concepts. The ID3 algorithm is used in the learning process with the purpose of classification of offers. There are two classes: positive and negative. Positive means that the service description will possibly be accepted by the consumer agent whereas the negative implies that it will potentially be rejected by the consumer. Consumer's requests are considered as positive training examples and all rejected counter-offers are thought as negative ones. The decision tree has two types of nodes: leaf node in which the class labels of the instances are held and non-leaf nodes in which test attributes are held. The test attribute in a non-leaf node is one of the attributes making up the service description. For instance, body, flavor, color and so on are potential test attributes for wine service. When we want to find whether the given service description is acceptable, we start searching from the root node by examining the value of test attributes until reaching a leaf node. The problem with this algorithm is that it is not an incremental algorithm, which means all the training examples should exist before learning. To overcome this problem, the system keeps consumer's requests throughout the negotiation interaction as positive examples and all counter-offers rejected by the consumer as negative examples. After each coming request, the decision tree is rebuilt. Without doubt, there is a drawback of reconstruction such as additional process load. However, in practice we have evaluated ID3 to be fast and the reconstruction cost to be negligible. 4. SERVICE OFFERING After learning the consumer's preferences, the producer needs to make a counter offer that is compatible with the consumer's preferences. 4.1 Service Offering via CEA and DCEA To generate the best offer, the producer agent uses its service ontology and the CEA algorithm. The service offering mechanism is the same for both the original CEA and DCEA, but as explained before their methods for updating G and S are different. When producer receives a request from the consumer, the learning set of the producer is trained with this request as a positive sample. The learning components, the most specific set S and the most general set G are actively used in offering service. The most general set, G is used by the producer in order to avoid offering the services, which will be rejected by the consumer agent. 
In other words, it filters the service set from the undesired services, since G contains hypotheses that are consistent with the requests of the consumer. The most specific set, S is used in order to find best offer, which is similar to the consumer's preferences. Since the most specific set S holds the previous requests and the current request, estimating similarity between this set and every service in the service list is very convenient to find the best offer from the service list. When the consumer starts the interaction with the producer agent, producer agent loads all related services to the service list object. This list constitutes the provider's inventory of services. Upon receiving a request, if the producer can offer an exactly matching service, then it does so. For example, for a wine this corresponds to selling a wine that matches the specified features of the consumer's request identically. When the producer cannot offer the service as requested, it tries to find the service that is most similar to the services that have been requested by the consumer during the negotiation. To do this, the producer has to compute the similarity between the services it can offer and the services that have been requested (in S). We compute the similarities in various ways as will be explained in Section 5. After the similarity of the available services with the current S is calculated, there may be more than one service with the maximum similarity. The producer agent can break the tie in a number of ways. Here, we have associated a rating value with each service and the producer prefers the higher rated service to others. 4.2 Service Offering via ID3 If the producer learns the consumer's preferences with ID3, a similar mechanism is applied with two differences. First, since ID3 does not maintain G, the list of unaccepted services that are classified as negative are removed from the service list. Second, the similarities of possible services are not measured with respect to S, but instead to all previously made requests. 4.3 Alternative Service Offering Mechanisms In addition to these three service offering mechanisms (Service Offering with CEA, Service Offering with DCEA, and Service Offering with ID3), we include two other mechanisms. . 1304 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) • Random Service Offering (RO): The producer generates a counter offer randomly from the available service list, without considering the consumer's preferences. • Service Offering considering only the current request (SCR): The producer selects a counter offer according to the similarity of the consumer's current request but does not consider previous requests. 5. SIMILARITY ESTIMATION Similarity can be estimated with a similarity metric that takes two entries and returns how similar they are. There are several similarity metrics used in case based reasoning system such as weighted sum of Euclidean distance, Hamming distance and so on [12]. The similarity metric affects the performance of the system while deciding which service is the closest to the consumer's request. We first analyze some existing metrics and then propose a new semantic similarity metric named RP Similarity. 5.1 Tversky's Similarity Metric Tversky's similarity metric compares two vectors in terms of the number of exactly matching features [17]. In Equation (1), common represents the number of matched attributes whereas different represents the number of the different attributes. 
Our current assumption is that α and β is equal to each other. Here, when two features are compared, we assign zero for dissimilarity and one for similarity by omitting the semantic closeness among the feature values. Tversky's similarity metric is designed to compare two feature vectors. In our system, whereas the list of services that can be offered by the producer are each a feature vector, the most specific set S is not a feature vector. S consists of hypotheses of feature vectors. Therefore, we estimate the similarity of each hypothesis inside the most specific set S and then take the average of the similarities. EXAMPLE 3. Assume that S contains the following two hypothesis: {{Light, Moderate, (Red, White)}, {Full, Strong, Rose}}. Take service s as (Light, Strong, Rose). Then the similarity of the first one is equal to 1/3 and the second one is equal to 2/3 in accordance with Equation (1). Normally, we take the average of it and obtain (1/3 + 2/3) / 2, equally 1/2. However, the first hypothesis involves the effect of two requests and the second hypothesis involves only one request. As a result, we expect the effect of the first hypothesis to be greater than that of the second. Therefore, we calculate the average similarity by considering the number of samples that hypotheses cover. Let ch denote the number of samples that hypothesis h covers and (SM (h, service)) denote the similarity of hypothesis h with the given service. We compute the similarity of each hypothesis with the given service and weight them with the number of samples they cover. We find the similarity by dividing the weighted sum of the similarities of all hypotheses in S with the service by the number of all samples that are covered in S. Figure 2: Sample taxonomy for similarity estimation EXAMPLE 4. For the above example, the similarity of (Light, Strong, Rose) with the specific set is (2 * 1/3 + 2/3) / 3, equally 4/9. The possible number of samples that a hypothesis covers can be estimated with multiplying cardinalities of each attribute. For example, the cardinality of the first attribute is two and the others is equal to one for the given hypothesis such as {Light, Moderate, (Red, White)}. When we multiply them, we obtain two (2 * 1 * 1 = 2). 5.2 Lin's Similarity Metric A taxonomy can be used while estimating semantic similarity between two concepts. Estimating semantic similarity in a Is-A taxonomy can be done by calculating the distance between the nodes related to the compared concepts. The links among the nodes can be considered as distances. Then, the length of the path between the nodes indicates how closely similar the concepts are. An alternative estimation to use information content in estimation of semantic similarity rather than edge counting method, was proposed by Lin [8]. The equation (3) [8] shows Lin's similarity where c1 and c2 are the compared concepts and c0 is the most specific concept that subsumes both of them. Besides, P (C) represents the probability of an arbitrary selected object belongs to concept C. 5.3 Wu & Palmer's Similarity Metric Different from Lin, Wu and Palmer use the distance between the nodes in IS-A taxonomy [20]. The semantic similarity is represented with Equation (4) [20]. Here, the similarity between c1 and c2 is estimated and c0 is the most specific concept subsuming these classes. N1 is the number of edges between c1 and c0. N2 is the number of edges between c2 and c0. N0 is the number of IS-A links of c0 from the root of the taxonomy. 
5.4 RP Semantic Metric We propose to estimate the relative distance in a taxonomy between two concepts using the following intuitions. We use Figure 2 to illustrate these intuitions. • Parent versus grandparent: Parent of a node is more similar to the node than grandparents of that. Generalization of The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1305 a concept reasonably results in going further away that concept. The more general concepts are, the less similar they are. For example, AnyWineColor is parent of ReddishColor and ReddishColor is parent of Red. Then, we expect the similarity between ReddishColor and Red to be higher than that of the similarity between AnyWineColor and Red. • Parent versus sibling: A node would have higher similarity to its parent than to its sibling. For instance, Red and Rose are children of ReddishColor. In this case, we expect the similarity between Red and ReddishColor to be higher than that of Red and Rose. • Sibling versus grandparent: A node is more similar to it 's sibling then to its grandparent. To illustrate, AnyWineColor is grandparent of Red, and Red and Rose are siblings. Therefore, we possibly anticipate that Red and Rose are more similar than AnyWineColor and Red. As a taxonomy is represented in a tree, that tree can be traversed from the first concept being compared through the second concept. At starting node related to the first concept, the similarity value is constant and equal to one. This value is diminished by a constant at each node being visited over the path that will reach to the node including the second concept. The shorter the path between the concepts, the higher the similarity between nodes. 1: Similarity 1 2: if c1 is equal to c2 then 3: Return Similarity 4: end if 5: commonParent findCommonParent (c1, c2) {commonParent is the most specific concept that covers both c1 and c2} 6: N1 findDistance (commonParent, c1) 7: N2 findDistance (commonParent, c2) {N1 & N2 are the number of links between the concept and parent concept} 8: if (commonParent == c1) or (commonParent == c2) then 9: Similarity Similarity * m (N1 + N2) 10: else 11: Similarity Similarity * n * m (N1 + N2 − 2) 12: end if 13: Return Similarity Relative distance between nodes c1 and c2 is estimated in the following way. Starting from c1, the tree is traversed to reach c2. At each hop, the similarity decreases since the concepts are getting farther away from each other. However, based on our intuitions, not all hops decrease the similarity equally. Let m represent the factor for hopping from a child to a parent and n represent the factor for hopping from a sibling to another sibling. Since hopping from a node to its grandparent counts as two parent hops, the discount factor of moving from a node to its grandparent is m2. According to the above intuitions, our constants should be in the form m> n> m2 where the value of m and n should be between zero and one. Algorithm 2 shows the distance calculation. According to the algorithm, firstly the similarity is initialized with the value of one (line 1). If the concepts are equal to each other then, similarity will be one (lines 2-4). Otherwise, we compute the common parent of the two nodes and the distance of each concept to the common parent without considering the sibling (lines 5-7). If one of the concepts is equal to the common parent, then there is no sibling relation between the concepts. 
For each level, we multiply the similarity by m and do not consider the sibling factor in the similarity estimation. As a result, we decrease the similarity at each level with the rate of m (line9). Otherwise, there has to be a sibling relation. This means that we have to consider the effect of n when measuring similarity. Recall that we have counted N1 + N2 edges between the concepts. Since there is a sibling relation, two of these edges constitute the sibling relation. Hence, when calculating the effect of the parent relation, we use N1 + N2--2 edges (line 11). Some similarity estimations related to the taxonomy in Figure 2 are given in Table 2. In this example, m is taken as 2/3 and n is taken as 4/7. Table 2: Sample similarity estimation over sample taxonomy For all semantic similarity metrics in our architecture, the taxonomy for features is held in the shared ontology. In order to evaluate the similarity of feature vector, we firstly estimate the similarity for feature one by one and take the average sum of these similarities. Then the result is equal to the average semantic similarity of the entire feature vector. 6. DEVELOPED SYSTEM We have implemented our architecture in Java. To ease testing of the system, the consumer agent has a user interface that allows us to enter various requests. The producer agent is fully automated and the learning and service offering operations work as explained before. In this section, we explain the implementation details of the developed system. We use OWL [11] as our ontology language and JENA as our ontology reasoner. The shared ontology is the modified version of the Wine Ontology [19]. It includes the description of wine as a concept and different types of wine. All participants of the negotiation use this ontology for understanding each other. According to the ontology, seven properties make up the wine concept. The consumer agent and the producer agent obtain the possible values for the these properties by querying the ontology. Thus, all possible values for the components of the wine concept such as color, body, sugar and so on can be reached by both agents. Also a variety of wine types are described in this ontology such as Burgundy, Chardonnay, CheninBlanc and so on. Intuitively, any wine type described in the ontology also represents a wine concept. This allows us to consider instances of Chardonnay wine as instances of Wine class. In addition to wine description, the hierarchical information of some features can be inferred from the ontology. For instance, we can represent the information Europe Continent covers Western Country. Western Country covers French Region, which covers some territories such as Loire, Bordeaux and so on. This hierarchical information is used in estimation of semantic similarity. In this part, some reasoning can be made such as if a concept X covers Y and Y covers Z, then concept X covers Z. For example, Europe Continent covers Bordeaux. 1306 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) For some features such as body, flavor and sugar, there is no hierarchical information, but their values are semantically leveled. When that is the case, we give the reasonable similarity values for these features. For example, the body can be light, medium, or strong. In this case, we assume that light is 0.66 similar to medium but only 0.33 to strong. WineStock Ontology is the producer's inventory and describes a product class as WineProduct. 
This class is necessary for the producer to record the wines that it sells. Ontology involves the individuals of this class. The individuals represent available services that the producer owns. We have prepared two separate WineStock ontologies for testing. In the first ontology, there are 19 available wine products and in the second ontology, there are 50 products. 7. PERFORMANCE EVALUATION We evaluate the performance of the proposed systems in respect to learning technique they used, DCEA and ID3, by comparing them with the CEA, RO (for random offering), and SCR (offering based on current request only). We apply a variety of scenarios on this dataset in order to see the performance differences. Each test scenario contains a list of preferences for the user and number of matches from the product list. Table 3 shows these preferences and availability of those products in the inventory for first five scenarios. Note that these preferences are internal to the consumer and the producer tries to learn these during negotiation. Table 3: Availability of wines in different test scenarios 7.1 Comparison of Learning Algorithms In comparison of learning algorithms, we use the five scenarios in Table 3. Here, first we use Tversky's similarity measure. With these test cases, we are interested in finding the number of iterations that are required for the producer to generate an acceptable offer for the consumer. Since the performance also depends on the initial request, we repeat our experiments with different initial requests. Consequently, for each case, we run the algorithms five times with several variations of the initial requests. In each experiment, we count the number of iterations that were needed to reach an agreement. We take the average of these numbers in order to evaluate these systems fairly. As is customary, we test each algorithm with the same initial requests. Table 4 compares the approaches using different learning algorithm. When the large parts of inventory is compatible with the customer's preferences as in the first test case, the performance of all techniques are nearly same (e.g., Scenario 1). As the number of compatible services drops, RO performs poorly as expected. The second worst method is SCR since it only considers the customer's most recent request and does not learn from previous requests. CEA gives the best results when it can generate an answer but cannot handle the cases containing disjunctive preferences, such as the one in Scenario 5. ID3 and DCEA achieve the best results. Their performance is comparable and they can handle all cases including Scenario 5. Table 4: Comparison of learning algorithms in terms of average number of interactions 7.2 Comparison of Similarity Metrics To compare the similarity metrics that were explained in Section 5, we fix the learning algorithm to DCEA. In addition to the scenarios shown in Table 3, we add following five new scenarios considering the hierarchical information. • The customer wants to buy wine whose winery is located in California and whose grape is a type of white grape. Moreover, the winery of the wine should not be expensive. There are only four products meeting these conditions. • The customer wants to buy wine whose color is red or rose and grape type is red grape. In addition, the location of wine should be in Europe. The sweetness degree is wished to be dry or off dry. The flavor should be delicate or moderate where the body should be medium or light. 
Furthermore, the winery of the wine should be an expensive winery. There are two products meeting all these requirements. • The customer wants to buy moderate rose wine, which is located around French Region. The category of winery should be Moderate Winery. There is only one product meeting these requirements. • The customer wants to buy expensive red wine, which is located around California Region or cheap white wine, which is located in around Texas Region. There are five available products. • The customer wants to buy delicate white wine whose pro ducer in the category of Expensive Winery. There are two available products. The first seven scenarios are tested with the first dataset that contains a total of 19 services and the last three scenarios are tested with the second dataset that contains 50 services. Table 5 gives the performance evaluation in terms of the number of interactions needed to reach a consensus. Tversky's metric gives the worst results since it does not consider the semantic similarity. Lin's performance are better than Tversky but worse than others. Wu Palmer's metric and RP similarity measure nearly give the same performance and better than others. When the results are examined, considering semantic closeness increases the performance. 8. DISCUSSION We review the recent literature in comparison to our work. Tama et al. [16] propose a new approach based on ontology for negotiation. According to their approach, the negotiation protocols used in e-commerce can be modeled as ontologies. Thus, the agents can perform negotiation protocol by using this shared ontology without the need of being hard coded of negotiation protocol details. While The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1307 Table 5: Comparison of similarity metrics in terms of number of interactions Tama et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated. Further, we have built a system with which negotiation preferences can be learned. Sadri et al. study negotiation in the context of resource allocation [14]. Agents have limited resources and need to require missing resources from other agents. A mechanism which is based on dialogue sequences among agents is proposed as a solution. The mechanism relies on observe-think-action agent cycle. These dialogues include offering resources, resource exchanges and offering alternative resource. Each agent in the system plans its actions to reach a goal state. Contrary to our approach, Sadri et al.'s study is not concerned with learning preferences of each other. Brzostowski and Kowalczyk propose an approach to select an appropriate negotiation partner by investigating previous multi-attribute negotiations [1]. For achieving this, they use case-based reasoning. Their approach is probabilistic since the behavior of the partners can change at each iteration. In our approach, we are interested in negotiation the content of the service. After the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price. Fatima et al. study the factors that affect the negotiation such as preferences, deadline, price and so on, since the agent who develops a strategy against its opponent should consider all of them [5]. In their approach, the goal of the seller agent is to sell the service for the highest possible price whereas the goal of the buyer agent is to buy the good with the lowest possible price. 
Time interval affects these agents differently. Compared to Fatima et al. our focus is different. While they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation. Faratin et al. propose a multi-issue negotiation mechanism, where the service variables for the negotiation such as price, quality of the service, and so on are considered traded-offs against each other (i.e., higher price for earlier delivery) [4]. They generate a heuristic model for trade-offs including fuzzy similarity estimation and a hill-climbing exploration for possibly acceptable offers. Although we address a similar problem, we learn the preferences of the customer by the help of inductive learning and generate counter-offers in accordance with these learned preferences. Faratin et al. only use the last offer made by the consumer in calculating the similarity for choosing counter offer. Unlike them, we also take into account the previous requests of the consumer. In their experiments, Faratin et al. assume that the weights for service variables are fixed a priori. On the contrary, we learn these preferences over time. In our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from subsumption hierarchy of relations. Further, by using relationships among features, the producer can discover new knowledge from the existing knowledge. These are interesting directions that we will pursue in our future work.
Learning Consumer Preferences Using Semantic Similarity ∗ Reyhan Aydoğan Pınar Yolum
ABSTRACT
In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics.
1. INTRODUCTION
Current approaches to e-commerce treat service price as the primary construct for negotiation by assuming that the service content is fixed [9]. However, negotiation on price presupposes that other properties of the service have already been agreed upon. Nevertheless, many times the service provider may not be offering the exact requested service due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15]. Most existing negotiation approaches, however, assume that all features of a service are equally important and concentrate on the price [5, 2]. In reality, not all features may be relevant, and the relevance of a feature may vary from consumer to consumer. For instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluation of the service components with different weights can be useful. Some studies take these weights as given a priori and use fixed weights [4]. In most cases, however, the producer does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer. Preference Learning: As an alternative, we propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time. We represent service requests as a vector of service features. We use an ontology in order to capture the relations between services and to construct the features for a given service. By using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation. The particular service we have used is a wine selling service. The wine seller learns the wine preferences of the customer to sell better targeted wines. The producer models the requests of the consumer and its counter offers to learn which features are more important for the consumer.
Since no information is present before the interactions start, the learning algorithm has to be incremental so that it can be trained at run time and can revise itself with each new interaction. Service Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services. In other words, the question is how the producer uses the information that was learned from the dialogues to make the best offer to the consumer. For instance, assume that the producer has learned that the consumer wants to buy a red wine but the producer can only offer rose or white wine. What should the producer's offer contain: white wine or rose wine? If the producer has some domain knowledge about semantic similarity (e.g., it knows that red and rose wines are taste-wise more similar than red and white wines), then it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure similarity between available services and learned preferences. The rest of this paper is organized as follows: Section 2 explains our proposed architecture. Section 3 explains the learning algorithms that were studied to learn consumer preferences. Section 4 studies the different service offering mechanisms. Section 5 contains the similarity metrics used in the experiments. The developed system is analyzed in Section 6. Section 7 provides our experimental setup, test cases, and results. Finally, Section 8 discusses and compares our work with other related work.
2. ARCHITECTURE
3. PREFERENCE LEARNING
3.1 CEA
3.2 Disjunctive CEA
3.3 ID3
4. SERVICE OFFERING
4.1 Service Offering via CEA and DCEA
4.2 Service Offering via ID3
4.3 Alternative Service Offering Mechanisms
5. SIMILARITY ESTIMATION
5.1 Tversky's Similarity Metric
5.2 Lin's Similarity Metric
5.3 Wu & Palmer's Similarity Metric
5.4 RP Semantic Metric
6. DEVELOPED SYSTEM
7. PERFORMANCE EVALUATION
7.1 Comparison of Learning Algorithms
7.2 Comparison of Similarity Metrics
8. DISCUSSION
We review the recent literature in comparison to our work. Tama et al. [16] propose a new approach based on ontology for negotiation. According to their approach, the negotiation protocols used in e-commerce can be modeled as ontologies. Thus, the agents can carry out a negotiation protocol by using this shared ontology, without the protocol details having to be hard-coded. While Tama et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated. Further, we have built a system with which negotiation preferences can be learned. Sadri et al. study negotiation in the context of resource allocation [14]. Agents have limited resources and need to acquire missing resources from other agents.
A mechanism which is based on dialogue sequences among agents is proposed as a solution. The mechanism relies on an observe-think-action agent cycle. These dialogues include offering resources, resource exchanges, and offering alternative resources. Each agent in the system plans its actions to reach a goal state. Contrary to our approach, Sadri et al.'s study is not concerned with the agents learning each other's preferences. Brzostowski and Kowalczyk propose an approach to select an appropriate negotiation partner by investigating previous multi-attribute negotiations [1]. For achieving this, they use case-based reasoning. Their approach is probabilistic since the behavior of the partners can change at each iteration. In our approach, we are interested in negotiating the content of the service. After the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price. Fatima et al. study the factors that affect the negotiation, such as preferences, deadline, price, and so on, since the agent who develops a strategy against its opponent should consider all of them [5]. In their approach, the goal of the seller agent is to sell the service for the highest possible price, whereas the goal of the buyer agent is to buy the good with the lowest possible price. The time interval affects these agents differently. Compared to Fatima et al., our focus is different. While they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation. Faratin et al. propose a multi-issue negotiation mechanism, where the service variables for the negotiation, such as price, quality of the service, and so on, are traded off against each other (e.g., a higher price for earlier delivery) [4]. They generate a heuristic model for trade-offs, including fuzzy similarity estimation and a hill-climbing exploration for possibly acceptable offers. Although we address a similar problem, we learn the preferences of the customer with the help of inductive learning and generate counter-offers in accordance with these learned preferences. Faratin et al. only use the last offer made by the consumer in calculating the similarity for choosing a counter-offer. Unlike them, we also take into account the previous requests of the consumer. In their experiments, Faratin et al. assume that the weights for service variables are fixed a priori. On the contrary, we learn these preferences over time. In our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from the subsumption hierarchy of relations. Further, by using relationships among features, the producer can discover new knowledge from the existing knowledge. These are interesting directions that we will pursue.
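As a rough illustration of the incremental, Version Space style learning discussed in this paper, the sketch below maintains only the specific boundary of a candidate elimination learner over attribute vectors. The attribute values and the "?" wildcard convention are illustrative assumptions; the paper's CEA/DCEA also maintain a general boundary and handle negative examples and disjunctive hypotheses, which are omitted here for brevity.

```python
def generalize(h, example):
    """Minimally generalize hypothesis h so that it covers the example;
    "?" is the usual "any value" wildcard."""
    if h is None:  # first positive example: start with the most specific hypothesis
        return list(example)
    return [hv if hv == ev else "?" for hv, ev in zip(h, example)]

def covers(h, example):
    return all(hv == "?" or hv == ev for hv, ev in zip(h, example))

# Requests the consumer accepted, as (color, body, winery-category) vectors:
s = None
for accepted in [("Red", "Full", "ExpensiveWinery"),
                 ("Red", "Medium", "ExpensiveWinery")]:
    s = generalize(s, accepted)

print(s)  # ['Red', '?', 'ExpensiveWinery']: color and winery look relevant, body does not
print(covers(s, ("Red", "Light", "ExpensiveWinery")))  # True
```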
Learning Consumer Preferences Using Semantic Similarity ∗ Reyhan Aydoğan Pınar Yolum
ABSTRACT
In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics.
1. INTRODUCTION
Current approaches to e-commerce treat service price as the primary construct for negotiation by assuming that the service content is fixed [9]. However, negotiation on price presupposes that other properties of the service have already been agreed upon. Nevertheless, many times the service provider may not be offering the exact requested service due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15]. Most existing negotiation approaches, however, assume that all features of a service are equally important and concentrate on the price [5, 2]. In reality, not all features may be relevant, and the relevance of a feature may vary from consumer to consumer. For instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluation of the service components with different weights can be useful. Some studies take these weights as given a priori and use fixed weights [4]. In most cases, however, the producer does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer. Preference Learning: As an alternative, we propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time. We represent service requests as a vector of service features. We use an ontology in order to capture the relations between services and to construct the features for a given service. By using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation. The particular service we have used is a wine selling service. The wine seller learns the wine preferences of the customer to sell better targeted wines. The producer models the requests of the consumer and its counter offers to learn which features are more important for the consumer.
Service Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services. In other words, the question is how the producer uses the information that was learned from the dialogues to make the best offer to the consumer. For instance, assume that the producer has learned that the consumer wants to buy a red wine but the producer can only offer rose or white wine. What should the producer's offer contain: white wine or rose wine? If the producer has some domain knowledge about semantic similarity, it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure similarity between available services and learned preferences. The rest of this paper is organized as follows: Section 2 explains our proposed architecture. Section 3 explains the learning algorithms that were studied to learn consumer preferences. Section 4 studies the different service offering mechanisms. Section 5 contains the similarity metrics used in the experiments. The developed system is analyzed in Section 6. Section 7 provides our experimental setup, test cases, and results. Finally, Section 8 discusses and compares our work with other related work.
8. DISCUSSION
We review the recent literature in comparison to our work. Tama et al. [16] propose a new approach based on ontology for negotiation. According to their approach, the negotiation protocols used in e-commerce can be modeled as ontologies. Thus, the agents can carry out a negotiation protocol by using this shared ontology, without the protocol details having to be hard-coded. While Tama et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated. Further, we have built a system with which negotiation preferences can be learned. Sadri et al. study negotiation in the context of resource allocation [14]. Agents have limited resources and need to acquire missing resources from other agents. A mechanism which is based on dialogue sequences among agents is proposed as a solution. The mechanism relies on an observe-think-action agent cycle. These dialogues include offering resources, resource exchanges, and offering alternative resources. Each agent in the system plans its actions to reach a goal state. Contrary to our approach, Sadri et al.'s study is not concerned with the agents learning each other's preferences. Brzostowski and Kowalczyk propose an approach to select an appropriate negotiation partner by investigating previous multi-attribute negotiations [1]. In our approach, we are interested in negotiating the content of the service. After the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price. Fatima et al. study factors that affect negotiation, such as preferences, deadline, and price; the time interval affects the seller and buyer agents differently [5]. Compared to Fatima et al., our focus is different. While they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation. Faratin et al. only use the last offer made by the consumer in calculating the similarity for choosing a counter-offer. Unlike them, we also take into account the previous requests of the consumer. In their experiments, Faratin et al. assume that the weights for service variables are fixed a priori. On the contrary, we learn these preferences over time.
In our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from the subsumption hierarchy of relations. Further, by using relationships among features, the producer can discover new knowledge from the existing knowledge.
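A minimal sketch of the similarity-driven offer generation this work argues for: when the producer cannot supply the learned preference exactly, it offers the available service that a chosen metric deems most similar. The names are illustrative, and `similarity` could be any of the metrics compared in the paper (Tversky, Lin, Wu & Palmer, or RP).

```python
def best_offer(learned_preference, inventory, similarity):
    """Counter-offer the in-stock service closest to the learned preference."""
    return max(inventory, key=lambda service: similarity(learned_preference, service))

# With preference "RedWine" and inventory ["RoseWine", "WhiteWine"], any
# metric that rates rose wine closer to red wine than white wine is rated
# makes the producer counter-offer the rose wine.
```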
J-37
Finding Equilibria in Large Sequential Games of Imperfect Information
Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is $\tilde{O}(n^2)$, where $n$ is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes, over four orders of magnitude more than in the largest poker game solved previously. We discuss several electronic commerce applications for GameShrink. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.
[ "sequenti game", "sequenti game of imperfect inform", "imperfect inform", "equilibrium", "comput game theori", "game theori", "order game isomorph", "relat order game isomorph abstract transform", "observ action", "order signal space", "nash equilibrium", "gameshrink", "signal tree", "norm framework", "ration behavior", "strategi profil", "autom abstract", "equilibrium find", "comput poker" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "M", "M", "R", "R" ]
Finding Equilibria in Large Sequential Games of Imperfect Information∗ Andrew Gilpin Computer Science Department Carnegie Mellon University Pittsburgh, PA, USA gilpin@cs.cmu.edu Tuomas Sandholm Computer Science Department Carnegie Mellon University Pittsburgh, PA, USA sandholm@cs.cmu.edu
ABSTRACT
Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is $\tilde{O}(n^2)$, where $n$ is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes, over four orders of magnitude more than in the largest poker game solved previously. We discuss several electronic commerce applications for GameShrink. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.
Categories and Subject Descriptors: I.2 [Artificial Intelligence], F. [Theory of Computation], J.4 [Social and Behavioral Sciences]: Economics.
General Terms: Algorithms, Economics, Theory.
1. INTRODUCTION
In environments with more than one agent, an agent's outcome is generally affected by the actions of the other agent(s). Consequently, the optimal action of one agent can depend on the others. Game theory provides a normative framework for analyzing such strategic situations. In particular, it provides solution concepts that define what rational behavior is in such settings. The most famous and important solution concept is that of Nash equilibrium [36]. It is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy. However, for the concept to be operational, we need algorithmic techniques for finding an equilibrium. Games can be classified as either games of perfect information or imperfect information. Chess and Go are examples of the former, and, until recently, most game playing work has been on games of this type. To compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes. If the agent is able to traverse the entire game tree, she simply computes an optimal strategy from the bottom-up, using the principle of backward induction. (This actually yields a solution that satisfies not only the Nash equilibrium solution concept, but a stronger solution concept called subgame perfect Nash equilibrium [45].) In computer science terms, this is done using minimax search (often in conjunction with αβ-pruning to reduce the search tree size and thus enhance speed). Minimax search runs in linear time in the size of the game tree. (This type of algorithm still does not scale to huge trees such as in chess or Go, but effective game-playing agents can be developed even then by evaluating intermediate nodes using a heuristic evaluation and then treating those nodes as leaves.) The differentiating feature of games of imperfect information, such as poker, is that they are not fully observable: when it is an agent's turn to move, she does not have access to all of the information about the world.
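As a concrete companion to the perfect-information case just described, here is a minimal sketch of minimax search with αβ-pruning. The game-tree interface (`is_leaf`, `value`, `max_to_move`, `children`) is a hypothetical stand-in, and leaf values are payoffs to the maximizing player.

```python
def minimax(node, alpha=float("-inf"), beta=float("inf")):
    """Backward induction via depth-first minimax with alpha-beta pruning."""
    if node.is_leaf():
        return node.value()
    if node.max_to_move():
        best = float("-inf")
        for child in node.children():
            best = max(best, minimax(child, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:  # the minimizing player would never allow this branch
                break
        return best
    best = float("inf")
    for child in node.children():
        best = min(best, minimax(child, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

Each node is visited at most once, which is the linear-time behavior noted above; as the next paragraph explains, this per-node reasoning breaks down under imperfect information.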
In such games, the decision of what to do at a point in time cannot generally be optimally made without considering decisions at all other points in time (including ones on other paths of play) because those other decisions affect the probabilities of being at different states at the current point in time. Thus the algorithms for perfect information games do not solve games of imperfect information. For sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent. (An ε-equilibrium in a normal form game with any constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8]. The most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44]. For a survey of equilibrium computation in 2-player games, see [53]. Recently, equilibrium-finding algorithms that enumerate supports (i.e., sets of pure strategies that are played with positive probability) have been shown efficient on many games [40], and efficient mixed integer programming algorithms that search in the space of supports have been developed [43]. For more than two players, many algorithms have been proposed, but they currently only scale to very small games [19, 34, 40].) Unfortunately (even if equivalent strategies are replaced by a single strategy [27]) this representation is generally exponential in the size of the game tree [52]. By observing that one needs to consider only sequences of moves rather than pure strategies [41, 46, 22, 52], one arrives at a more compact representation, the sequence form, which is linear in the size of the game tree. (There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23].) For 2-player games, there is a polynomial-sized (in the size of the game tree) linear programming formulation (linear complementarity in the non-zero-sum case) based on the sequence form such that strategies for players 1 and 2 correspond to primal and dual variables. Thus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25]. (Recently this approach was extended to handle computing sequential equilibria [26] as well [35].) However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker.
1.1 Our approach
In this paper, we take a different approach to tackling the difficult problem of equilibrium computation. Instead of developing an equilibrium-finding method per se, we instead develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game corresponds directly to an equilibrium in the original game. Thus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game. The motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game. To this end, we introduce games with ordered signals (Section 2), a broad class of games that has enough structure for us to exploit for abstraction purposes. Instead of operating directly on the game tree (something we found to be technically challenging), we instead introduce the use of information filters (Section 2.1), which coarsen the information each player receives. They are used in our analysis and abstraction algorithm. By operating only in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding. We introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantage of such symmetries (Section 3). As our main equilibrium result we have the following:
Theorem 2. Let $\Gamma$ be a game with ordered signals, and let $F$ be an information filter for $\Gamma$. Let $F'$ be an information filter constructed from $F$ by one application of the ordered game isomorphic abstraction transformation, and let $\sigma'$ be a Nash equilibrium strategy profile of the induced game $\Gamma_{F'}$ (i.e., the game $\Gamma$ using the filter $F'$). If $\sigma$ is constructed by using the corresponding strategies of $\sigma'$, then $\sigma$ is a Nash equilibrium of $\Gamma_F$.
The proof of the theorem uses an equivalent characterization of Nash equilibria: $\sigma$ is a Nash equilibrium if and only if there exist beliefs $\mu$ (players' beliefs about unknown information) at all points of the game reachable by $\sigma$ such that $\sigma$ is sequentially rational (i.e., a best response) given $\mu$, where $\mu$ is updated using Bayes' rule. We can then use the fact that $\sigma'$ is a Nash equilibrium to show that $\sigma$ is a Nash equilibrium considering only local properties of the game. We also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4). Its complexity is $\tilde{O}(n^2)$, where $n$ is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. We present several algorithmic and data structure related speed improvements (Section 4.1), and we demonstrate how a simple modification to our algorithm yields an approximation algorithm (Section 5).
1.2 Electronic commerce applications
Sequential games of imperfect information are ubiquitous, for example in negotiation and in auctions. Often aspects of a player's knowledge are not pertinent for deciding what action the player should take at a given point in the game. On the trivial end, some aspects of a player's knowledge are never pertinent (e.g., whether it is raining or not has no bearing on the bidding strategy in an art auction), and such aspects can be completely left out of the model specification. However, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely. Furthermore, it may be highly non-obvious which aspects are pertinent in which states of the game.
Our algorithm automatically discovers which aspects are irrelevant in different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation. One broad application area that has this property is sequential negotiation (potentially over multiple issues). Another broad application area is sequential auctions (potentially over multiple goods). For example, in those states of a 1-object auction where bidder A can infer that his valuation is greater than that of bidder B, bidder A can ignore all his other information about B's signals, although that information would be relevant for inferring B's exact valuation. Furthermore, in some states of the auction, a bidder might not care which exact other bidders have which valuations, but cares about which valuations are held by the other bidders in aggregate (ignoring their identities). Many open-cry sequential auction and negotiation mechanisms fall within the game model studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42]. Our techniques are in no way specific to an application. The main experiment that we present in this paper is on a recreational game. We chose a particular poker game as the benchmark problem because it yields an extremely complicated and enormous game tree, it is a game of imperfect information, it is fully specified as a game (and the data is available), and it has been posted as a challenge problem by others [47] (to our knowledge no such challenge problem instances have been proposed for electronic commerce applications that require solving sequential games).
1.3 Rhode Island Hold'em poker
Poker is an enormously popular card game played around the world. The 2005 World Series of Poker had over $103 million in total prize money, including $56 million for the main event. Increasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments. Poker has been identified as an important research area in AI due to the uncertainty stemming from opponents' cards, opponents' future actions, and chance moves, among other reasons [5]. Almost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp. 186-219]. However, this work was limited to tiny games that could be solved by hand. More recently, AI researchers have been applying the computational power of modern hardware to computing game theory-based strategies for larger games. Koller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25]. Large-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies. Furthermore, the approximations were designed manually by a human expert. Our approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance. Rhode Island Hold'em was invented as a testbed for computational game playing [47]. It was designed so that it was similar in style to Texas Hold'em, yet not so large that devising reasonably intelligent strategies would be impossible. (The rules of Rhode Island Hold'em, as well as a discussion of how Rhode Island Hold'em can be modeled as a game with ordered signals, that is, how it fits in our model, are available in an extended version of this paper [13].)
We applied the techniques developed in this paper to find an exact (minimax) solution to Rhode Island Hold'em, which has a game tree exceeding 3.1 billion nodes. Applying the sequence form to Rhode Island Hold'em directly without abstraction yields a linear program with 91,224,226 rows, and the same number of columns. This is much too large for (current) linear programming algorithms to handle. We used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with 1,237,238 rows and columns, with 50,428,638 non-zero coefficients. We then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns. (Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986 rows and 89,121,538 columns, which still would have been prohibitively large to solve.) GameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations). Using a 1.65GHz IBM eServer p5 570 with 64 gigabytes of RAM (the linear program solver actually needed 25 gigabytes), we solved it in 7 days and 17 hours using the interior-point barrier method of CPLEX version 9.1.2. We recently demonstrated our optimal Rhode Island Hold'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http://www.cs.cmu.edu/~gilpin/gsi.html. While others have worked on computer programs for playing Rhode Island Hold'em [47], no optimal strategy has been found before. This is the largest poker game solved to date by over four orders of magnitude.
2. GAMES WITH ORDERED SIGNALS
We work with a slightly restricted class of games, as compared to the full generality of the extensive form. This class, which we call games with ordered signals, is highly structured, but still general enough to capture a wide range of strategic situations. A game with ordered signals consists of a finite number of rounds. Within a round, the players play a game on a directed tree (the tree can be different in different rounds). The only uncertainty players face stems from private signals the other players have received and from the unknown future signals. In other words, players observe each others' actions, but potentially not nature's actions. In each round, there can be public signals (announced to all players) and private signals (confidentially communicated to individual players). For simplicity, we assume, as is the case in most recreational games, that within each round, the number of private signals received is the same across players (this could quite likely be relaxed). We also assume that the legal actions that a player has are independent of the signals received. For example, in poker, the legal betting actions are independent of the cards received. Finally, the strongest assumption is that there is a partial ordering over sets of signals, and the payoffs are increasing (not necessarily strictly) in these signals. For example, in poker, this partial ordering corresponds exactly to the ranking of card hands.
Definition 1. A game with ordered signals is a tuple $\Gamma = \langle I, G, L, \Theta, \kappa, \gamma, p, \succeq, \omega, u \rangle$ where:
1. $I = \{1, \ldots, n\}$ is a finite set of players.
2. $G = \langle G^1, \ldots, G^r \rangle$, $G^j = (V^j, E^j)$, is a finite collection of finite directed trees with nodes $V^j$ and edges $E^j$. Let $Z^j$ denote the leaf nodes of $G^j$ and let $N^j(v)$ denote the outgoing neighbors of $v \in V^j$. $G^j$ is the stage game for round $j$.
3. $L = \langle L^1, \ldots, L^r \rangle$, $L^j : V^j \setminus Z^j \to I$ indicates which player acts (chooses an outgoing edge) at each internal node in round $j$.
4. $\Theta$ is a finite set of signals.
5. $\kappa = \langle \kappa^1, \ldots, \kappa^r \rangle$ and $\gamma = \langle \gamma^1, \ldots, \gamma^r \rangle$ are vectors of nonnegative integers, where $\kappa^j$ and $\gamma^j$ denote the number of public and private signals (per player), respectively, revealed in round $j$. Each signal $\theta \in \Theta$ may only be revealed once, and in each round every player receives the same number of private signals, so we require $\sum_{j=1}^{r} (\kappa^j + n\gamma^j) \le |\Theta|$. The public information revealed in round $j$ is $\alpha^j \in \Theta^{\kappa^j}$ and the public information revealed in all rounds up through round $j$ is $\tilde{\alpha}^j = (\alpha^1, \ldots, \alpha^j)$. The private information revealed to player $i \in I$ in round $j$ is $\beta_i^j \in \Theta^{\gamma^j}$ and the private information revealed to player $i \in I$ in all rounds up through round $j$ is $\tilde{\beta}_i^j = (\beta_i^1, \ldots, \beta_i^j)$. We also write $\tilde{\beta}^j = (\tilde{\beta}_1^j, \ldots, \tilde{\beta}_n^j)$ to represent all private information up through round $j$, and $(\tilde{\beta}_i'^j, \tilde{\beta}_{-i}^j) = (\tilde{\beta}_1^j, \ldots, \tilde{\beta}_{i-1}^j, \tilde{\beta}_i'^j, \tilde{\beta}_{i+1}^j, \ldots, \tilde{\beta}_n^j)$ is $\tilde{\beta}^j$ with $\tilde{\beta}_i^j$ replaced with $\tilde{\beta}_i'^j$. The total information revealed up through round $j$, $(\tilde{\alpha}^j, \tilde{\beta}^j)$, is said to be legal if no signals are repeated.
6. $p$ is a probability distribution over $\Theta$, with $p(\theta) > 0$ for all $\theta \in \Theta$. Signals are drawn from $\Theta$ according to $p$ without replacement, so if $X$ is the set of signals already revealed, then
$$p(x \mid X) = \begin{cases} \frac{p(x)}{\sum_{y \notin X} p(y)} & \text{if } x \notin X \\ 0 & \text{if } x \in X. \end{cases}$$
7. $\succeq$ is a partial ordering of subsets of $\Theta$ and is defined for at least those pairs required by $u$.
8. $\omega : \bigcup_{j=1}^{r} Z^j \to \{\text{over}, \text{continue}\}$ is a mapping of terminal nodes within a stage game to one of two values: over, in which case the game ends, or continue, in which case the game continues to the next round. Clearly, we require $\omega(z) = \text{over}$ for all $z \in Z^r$. Note that $\omega$ is independent of the signals. Let $\omega_{\text{over}}^j = \{z \in Z^j \mid \omega(z) = \text{over}\}$ and $\omega_{\text{cont}}^j = \{z \in Z^j \mid \omega(z) = \text{continue}\}$.
9. $u = (u^1, \ldots, u^r)$, $u^j : \left(\times_{k=1}^{j-1} \omega_{\text{cont}}^k\right) \times \omega_{\text{over}}^j \times \left(\times_{k=1}^{j} \Theta^{\kappa^k}\right) \times \left(\times_{i=1}^{n} \times_{k=1}^{j} \Theta^{\gamma^k}\right) \to \mathbb{R}^n$ is a utility function such that for every $j$, $1 \le j \le r$, for every $i \in I$, and for every $\tilde{z} \in \times_{k=1}^{j-1} \omega_{\text{cont}}^k \times \omega_{\text{over}}^j$, at least one of the following two conditions holds:
(a) Utility is signal independent: $u_i^j(\tilde{z}, \vartheta) = u_i^j(\tilde{z}, \vartheta')$ for all legal $\vartheta, \vartheta' \in \times_{k=1}^{j} \Theta^{\kappa^k} \times \times_{i=1}^{n} \times_{k=1}^{j} \Theta^{\gamma^k}$.
(b) $\succeq$ is defined for all legal signals $(\tilde{\alpha}^j, \tilde{\beta}_i^j)$, $(\tilde{\alpha}^j, \tilde{\beta}_i'^j)$ through round $j$ and a player's utility is increasing in her private signals, everything else equal:
$$(\tilde{\alpha}^j, \tilde{\beta}_i^j) \succeq (\tilde{\alpha}^j, \tilde{\beta}_i'^j) \implies u_i\left(\tilde{z}, \tilde{\alpha}^j, (\tilde{\beta}_i^j, \tilde{\beta}_{-i}^j)\right) \ge u_i\left(\tilde{z}, \tilde{\alpha}^j, (\tilde{\beta}_i'^j, \tilde{\beta}_{-i}^j)\right).$$
We will use the term game with ordered signals and the term ordered game interchangeably.
2.1 Information filters
In this subsection, we define an information filter for ordered games. Instead of completely revealing a signal (either public or private) to a player, the signal first passes through this filter, which outputs a coarsened signal to the player. By varying the filter applied to a game, we are able to obtain a wide variety of games while keeping the underlying action space of the game intact. We will use this when designing our abstraction techniques. Formally, an information filter is as follows.
Definition 2. Let $\Gamma = \langle I, G, L, \Theta, \kappa, \gamma, p, \succeq, \omega, u \rangle$ be an ordered game. Let $S^j \subseteq \times_{k=1}^{j} \Theta^{\kappa^k} \times \times_{k=1}^{j} \Theta^{\gamma^k}$ be the set of legal signals (i.e., no repeated signals) for one player through round $j$. An information filter for $\Gamma$ is a collection $F = \langle F^1, \ldots, F^r \rangle$ where each $F^j$ is a function $F^j : S^j \to 2^{S^j}$ such that each of the following conditions hold:
1. (Truthfulness) $(\tilde{\alpha}^j, \tilde{\beta}_i^j) \in F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j)$ for all legal $(\tilde{\alpha}^j, \tilde{\beta}_i^j)$.
2. (Independence) The range of $F^j$ is a partition of $S^j$.
3. (Information preservation) If two values of a signal are distinguishable in round $k$, then they are distinguishable for each round $j > k$. Let $m^j = \sum_{l=1}^{j} (\kappa^l + \gamma^l)$. We require that for all legal $(\theta_1, \ldots, \theta_{m^k}, \ldots, \theta_{m^j}) \subseteq \Theta$ and $(\theta_1', \ldots, \theta_{m^k}', \ldots, \theta_{m^j}') \subseteq \Theta$:
$$(\theta_1', \ldots, \theta_{m^k}') \notin F^k(\theta_1, \ldots, \theta_{m^k}) \implies (\theta_1', \ldots, \theta_{m^k}', \ldots, \theta_{m^j}') \notin F^j(\theta_1, \ldots, \theta_{m^k}, \ldots, \theta_{m^j}).$$
A game with ordered signals $\Gamma$ and an information filter $F$ for $\Gamma$ defines a new game $\Gamma_F$. We refer to such games as filtered ordered games. We are left with the original game if we use the identity filter $F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) = \{(\tilde{\alpha}^j, \tilde{\beta}_i^j)\}$. We have the following simple (but important) result:
Proposition 1. A filtered ordered game is an extensive form game satisfying perfect recall.
A simple proof proceeds by constructing an extensive form game directly from the ordered game, and showing that it satisfies perfect recall. In determining the payoffs in a game with filtered signals, we take the average over all real signals in the filtered class, weighted by the probability of each real signal occurring.
2.2 Strategies and Nash equilibrium
We are now ready to define behavior strategies in the context of filtered ordered games.
Definition 3. A behavior strategy for player $i$ in round $j$ of $\Gamma = \langle I, G, L, \Theta, \kappa, \gamma, p, \succeq, \omega, u \rangle$ with information filter $F$ is a probability distribution over possible actions, and is defined for each player $i$, each round $j$, and each $v \in V^j \setminus Z^j$ for $L^j(v) = i$:
$$\sigma_{i,v}^j : \left(\times_{k=1}^{j-1} \omega_{\text{cont}}^k\right) \times \text{Range}(F^j) \to \Delta\left(\{w \in V^j \mid (v, w) \in E^j\}\right).$$
($\Delta(X)$ is the set of probability distributions over a finite set $X$.) A behavior strategy for player $i$ in round $j$ is $\sigma_i^j = (\sigma_{i,v_1}^j, \ldots, \sigma_{i,v_m}^j)$ for each $v_k \in V^j \setminus Z^j$ where $L^j(v_k) = i$. A behavior strategy for player $i$ in $\Gamma$ is $\sigma_i = (\sigma_i^1, \ldots, \sigma_i^r)$. A strategy profile is $\sigma = (\sigma_1, \ldots, \sigma_n)$. A strategy profile with $\sigma_i$ replaced by $\sigma_i'$ is $(\sigma_i', \sigma_{-i}) = (\sigma_1, \ldots, \sigma_{i-1}, \sigma_i', \sigma_{i+1}, \ldots, \sigma_n)$. By an abuse of notation, we will say player $i$ receives an expected payoff of $u_i(\sigma)$ when all players are playing the strategy profile $\sigma$. Strategy $\sigma_i$ is said to be player $i$'s best response to $\sigma_{-i}$ if for all other strategies $\sigma_i'$ for player $i$ we have $u_i(\sigma_i, \sigma_{-i}) \ge u_i(\sigma_i', \sigma_{-i})$. $\sigma$ is a Nash equilibrium if, for every player $i$, $\sigma_i$ is a best response for $\sigma_{-i}$. A Nash equilibrium always exists in finite extensive form games [36], and one exists in behavior strategies for games with perfect recall [29]. Using these observations, we have the following corollary to Proposition 1:
Corollary 1. For any filtered ordered game, a Nash equilibrium exists in behavior strategies.
3. EQUILIBRIUM-PRESERVING ABSTRACTIONS
In this section, we present our main technique for reducing the size of games. We begin by defining a filtered signal tree which represents all of the chance moves in the game. The bold edges (i.e., the first two levels of the tree) in the game trees in Figure 1 correspond to the filtered signal trees in each game.
Definition 4. Associated with every ordered game $\Gamma = \langle I, G, L, \Theta, \kappa, \gamma, p, \succeq, \omega, u \rangle$ and information filter $F$ is a filtered signal tree, a directed tree in which each node corresponds to some revealed (filtered) signals and edges correspond to revealing specific (filtered) signals. The nodes in the filtered signal tree represent the set of all possible revealed filtered signals (public and private) at some point in time.
The filtered public signals revealed in round $j$ correspond to the nodes in the $\kappa^j$ levels beginning at level $\sum_{k=1}^{j-1} (\kappa^k + n\gamma^k)$ and the private signals revealed in round $j$ correspond to the nodes in the $n\gamma^j$ levels beginning at level $\sum_{k=1}^{j} \kappa^k + \sum_{k=1}^{j-1} n\gamma^k$. We denote children of a node $x$ as $N(x)$. In addition, we associate weights with the edges corresponding to the probability of the particular edge being chosen given that its parent was reached. In many games, there are certain situations in the game that can be thought of as being strategically equivalent to other situations in the game. By melding these situations together, it is possible to arrive at a strategically equivalent smaller game. The next two definitions formalize this notion via the introduction of the ordered game isomorphic relation and the ordered game isomorphic abstraction transformation.
Definition 5. Two subtrees beginning at internal nodes $x$ and $y$ of a filtered signal tree are ordered game isomorphic if $x$ and $y$ have the same parent and there is a bijection $f : N(x) \to N(y)$, such that for $w \in N(x)$ and $v \in N(y)$, $v = f(w)$ implies the weights on the edges $(x, w)$ and $(y, v)$ are the same and the subtrees beginning at $w$ and $v$ are ordered game isomorphic. Two leaves (corresponding to filtered signals $\vartheta$ and $\vartheta'$ up through round $r$) are ordered game isomorphic if for all $\tilde{z} \in \times_{j=1}^{r-1} \omega_{\text{cont}}^j \times \omega_{\text{over}}^r$, $u^r(\tilde{z}, \vartheta) = u^r(\tilde{z}, \vartheta')$.
Definition 6. Let $\Gamma = \langle I, G, L, \Theta, \kappa, \gamma, p, \succeq, \omega, u \rangle$ be an ordered game and let $F$ be an information filter for $\Gamma$. Let $\vartheta$ and $\vartheta'$ be two nodes where the subtrees in the induced filtered signal tree corresponding to the nodes $\vartheta$ and $\vartheta'$ are ordered game isomorphic, and $\vartheta$ and $\vartheta'$ are at either level $\sum_{k=1}^{j-1} (\kappa^k + n\gamma^k)$ or $\sum_{k=1}^{j} \kappa^k + \sum_{k=1}^{j-1} n\gamma^k$ for some round $j$. The ordered game isomorphic abstraction transformation is given by creating a new information filter $F'$:
$$F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) = \begin{cases} F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) & \text{if } (\tilde{\alpha}^j, \tilde{\beta}_i^j) \notin \vartheta \cup \vartheta' \\ \vartheta \cup \vartheta' & \text{if } (\tilde{\alpha}^j, \tilde{\beta}_i^j) \in \vartheta \cup \vartheta'. \end{cases}$$
Figure 1 shows the ordered game isomorphic abstraction transformation applied twice to a tiny poker game.
[Figure 1 (game trees omitted): GameShrink applied to a tiny two-person four-card (two Jacks and two Kings) poker game. Next to each game tree is the range of the information filter F. Dotted lines denote information sets, which are labeled by the controlling player. Open circles are chance nodes with the indicated transition probabilities. The root node is the chance node for player 1's card, and the next level is for player 2's card. The payment from player 2 to player 1 is given below each leaf. In this example, the algorithm reduces the game tree from 53 nodes to 19 nodes.]
Theorem 2, our main equilibrium result, shows how the ordered game isomorphic abstraction transformation can be used to compute equilibria faster.
Theorem 2. Let $\Gamma = \langle I, G, L, \Theta, \kappa, \gamma, p, \succeq, \omega, u \rangle$ be an ordered game and $F$ be an information filter for $\Gamma$. Let $F'$ be an information filter constructed from $F$ by one application of the ordered game isomorphic abstraction transformation. Let $\sigma'$ be a Nash equilibrium of the induced game $\Gamma_{F'}$. If we take $\sigma_{i,v}^j\left(\tilde{z}, F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j)\right) = \sigma_{i,v}'^j\left(\tilde{z}, F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j)\right)$, $\sigma$ is a Nash equilibrium of $\Gamma_F$.
Proof. For an extensive form game, a belief system $\mu$ assigns a probability to every decision node $x$ such that $\sum_{x \in h} \mu(x) = 1$ for every information set $h$. A strategy profile $\sigma$ is sequentially rational at $h$ given belief system $\mu$ if $u_i(\sigma_i, \sigma_{-i} \mid h, \mu) \ge u_i(\tau_i, \sigma_{-i} \mid h, \mu)$ for all other strategies $\tau_i$, where $i$ is the player who controls $h$. A basic result [33, Proposition 9.C.1] characterizing Nash equilibria dictates that $\sigma$ is a Nash equilibrium if and only if there is a belief system $\mu$ such that for every information set $h$ with $\Pr(h \mid \sigma) > 0$, the following two conditions hold: (C1) $\sigma$ is sequentially rational at $h$ given $\mu$; and (C2) $\mu(x) = \frac{\Pr(x \mid \sigma)}{\Pr(h \mid \sigma)}$ for all $x \in h$. Since $\sigma'$ is a Nash equilibrium of $\Gamma_{F'}$, there exists such a belief system $\mu'$ for $\Gamma_{F'}$. Using $\mu'$, we will construct a belief system $\mu$ for $\Gamma$ and show that conditions C1 and C2 hold, thus supporting $\sigma$ as a Nash equilibrium. Fix some player $i \in I$. Each of $i$'s information sets in some round $j$ corresponds to filtered signals $F^j(\tilde{\alpha}^{*j}, \tilde{\beta}_i^{*j})$, history in the first $j-1$ rounds $(z^1, \ldots, z^{j-1}) \in \times_{k=1}^{j-1} \omega_{\text{cont}}^k$, and history so far in round $j$, $v \in V^j \setminus Z^j$. Let $\tilde{z} = (z^1, \ldots, z^{j-1}, v)$ represent all of the player actions leading to this information set. Thus, we can uniquely specify this information set using the information $\left(F^j(\tilde{\alpha}^{*j}, \tilde{\beta}_i^{*j}), \tilde{z}\right)$. Each node in an information set corresponds to the possible private signals the other players have received. Denote by $\hat{\beta}$ some legal $\left(F^j(\tilde{\alpha}^j, \tilde{\beta}_1^j), \ldots, F^j(\tilde{\alpha}^j, \tilde{\beta}_{i-1}^j), F^j(\tilde{\alpha}^j, \tilde{\beta}_{i+1}^j), \ldots, F^j(\tilde{\alpha}^j, \tilde{\beta}_n^j)\right)$. In other words, there exists $(\tilde{\alpha}^j, \tilde{\beta}_1^j, \ldots, \tilde{\beta}_n^j)$ such that $(\tilde{\alpha}^j, \tilde{\beta}_i^j) \in F^j(\tilde{\alpha}^{*j}, \tilde{\beta}_i^{*j})$, $(\tilde{\alpha}^j, \tilde{\beta}_k^j) \in F^j(\tilde{\alpha}^j, \tilde{\beta}_k^j)$ for $k \ne i$, and no signals are repeated. Using such a set of signals $(\tilde{\alpha}^j, \tilde{\beta}_1^j, \ldots, \tilde{\beta}_n^j)$, let $\hat{\beta}'$ denote $\left(F'^j(\tilde{\alpha}^j, \tilde{\beta}_1^j), \ldots, F'^j(\tilde{\alpha}^j, \tilde{\beta}_{i-1}^j), F'^j(\tilde{\alpha}^j, \tilde{\beta}_{i+1}^j), \ldots, F'^j(\tilde{\alpha}^j, \tilde{\beta}_n^j)\right)$. (We will abuse notation and write $F_{-i}'^j(\hat{\beta}) = \hat{\beta}'$.) We can now compute $\mu$ directly from $\mu'$:
$$\mu\left(\hat{\beta} \mid F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j), \tilde{z}\right) = \begin{cases} \mu'\left(\hat{\beta}' \mid F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j), \tilde{z}\right) & \text{if } F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) = F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) \text{ or } \hat{\beta} = \hat{\beta}' \\ p^* \, \mu'\left(\hat{\beta}' \mid F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j), \tilde{z}\right) & \text{if } F^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) \ne F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j) \text{ and } \hat{\beta} \ne \hat{\beta}' \end{cases}$$
where $p^* = \frac{\Pr\left(\hat{\beta} \mid F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j)\right)}{\Pr\left(\hat{\beta}' \mid F'^j(\tilde{\alpha}^j, \tilde{\beta}_i^j)\right)}$. The following three claims show that $\mu$ as calculated above supports $\sigma$ as a Nash equilibrium.
Claim 1. $\mu$ is a valid belief system for $\Gamma_F$.
Claim 2. For all information sets $h$ with $\Pr(h \mid \sigma) > 0$, $\mu(x) = \frac{\Pr(x \mid \sigma)}{\Pr(h \mid \sigma)}$ for all $x \in h$.
Claim 3. For all information sets $h$ with $\Pr(h \mid \sigma) > 0$, $\sigma$ is sequentially rational at $h$ given $\mu$.
The proofs of Claims 1-3 are in an extended version of this paper [13]. By Claims 1 and 2, we know that condition C2 holds. By Claim 3, we know that condition C1 holds. Thus, $\sigma$ is a Nash equilibrium.
3.1 Nontriviality of generalizing beyond this model
Our model does not capture general sequential games of imperfect information because it is restricted in two ways (as discussed above): 1) there is a special structure connecting the player actions and the chance actions (for one, the players are assumed to observe each others' actions, but nature's actions might not be publicly observable), and 2) there is a common ordering of signals. In this subsection we show that removing either of these conditions can make our technique invalid. First, we demonstrate a failure when removing the first assumption. Consider the game in Figure 2. (We thank Albert Xin Jiang for providing this example.) Nodes $a$ and $b$ are in the same information set, have the same parent (chance) node, have isomorphic subtrees with the same payoffs, and nodes $c$ and $d$ also have similar structural properties. By merging the subtrees beginning at $a$ and $b$, we get the game on the right in Figure 2. In this game, player 1's only Nash equilibrium strategy is to play left. But in the original game, player 1 knows that node $c$ will never be reached, and so should play right in that information set.
[Figure 2 (game trees omitted): Example illustrating difficulty in developing a theory of equilibrium-preserving abstractions for general extensive form games.]
Removing the second assumption (that the utility functions are based on a common ordering of signals) can also cause failure. Consider a simple three-card game with a deck containing two Jacks (J1 and J2) and a King (K), where player 1's utility function is based on the ordering $K \succ J1 \sim J2$ but player 2's utility function is based on the ordering $J2 \succ K \succ J1$. It is easy to check that in the abstracted game (where player 1 treats J1 and J2 as being equivalent) the Nash equilibrium does not correspond to a Nash equilibrium in the original game. (We thank an anonymous person for this example.)
4. GAMESHRINK: AN EFFICIENT ALGORITHM FOR COMPUTING ORDERED GAME ISOMORPHIC ABSTRACTION TRANSFORMATIONS
This section presents an algorithm, GameShrink, for conducting the abstractions. It only needs to analyze the signal tree discussed above, rather than the entire game tree. We first present a subroutine that GameShrink uses. It is a dynamic program for computing the ordered game isomorphic relation. Again, it operates on the signal tree.
Algorithm 1. OrderedGameIsomorphic?($\Gamma, \vartheta, \vartheta'$)
1. If $\vartheta$ and $\vartheta'$ have different parents, then return false.
2. If $\vartheta$ and $\vartheta'$ are both leaves of the signal tree:
(a) If $u^r(\vartheta \mid \tilde{z}) = u^r(\vartheta' \mid \tilde{z})$ for all $\tilde{z} \in \times_{j=1}^{r-1} \omega_{\text{cont}}^j \times \omega_{\text{over}}^r$, then return true.
(b) Otherwise, return false.
3. Create a bipartite graph $G_{\vartheta,\vartheta'} = (V_1, V_2, E)$ with $V_1 = N(\vartheta)$ and $V_2 = N(\vartheta')$.
4. For each $v_1 \in V_1$ and $v_2 \in V_2$: if OrderedGameIsomorphic?($\Gamma, v_1, v_2$), create edge $(v_1, v_2)$.
5. Return true if $G_{\vartheta,\vartheta'}$ has a perfect matching; otherwise, return false.
By evaluating this dynamic program from bottom to top, Algorithm 1 determines, in time polynomial in the size of the signal tree, whether or not any pair of equal depth nodes $x$ and $y$ are ordered game isomorphic. We can further speed up this computation by only examining nodes with the same parent, since we know (from step 1) that no nodes with different parents are ordered game isomorphic. The test in step 2(a) can be computed in $O(1)$ time by consulting the $\succeq$ relation from the specification of the game.
Each call to OrderedGameIsomorphic? performs at most one perfect matching computation on a bipartite graph with $O(|\Theta|)$ nodes and $O(|\Theta|^2)$ edges (recall that $\Theta$ is the set of signals). Using the Ford-Fulkerson algorithm [12] for finding a maximal matching, this takes $O(|\Theta|^3)$ time. Let $S$ be the maximum number of signals possibly revealed in the game (e.g., in Rhode Island Hold'em, $S = 4$ because each of the two players has one card in the hand plus there are two cards on the table). The number of nodes, $n$, in the signal tree is $O(|\Theta|^S)$. The dynamic program visits each node in the signal tree, with each visit requiring $O(|\Theta|^2)$ calls to the OrderedGameIsomorphic? routine. So, it takes $O(|\Theta|^S |\Theta|^3 |\Theta|^2) = O(|\Theta|^{S+5})$ time to compute the entire ordered game isomorphic relation. While this is exponential in the number of revealed signals, we now show that it is polynomial in the size of the signal tree, and thus polynomial in the size of the game tree because the signal tree is smaller than the game tree. The number of nodes in the signal tree is
$$n = 1 + \sum_{i=1}^{S} \prod_{j=1}^{i} (|\Theta| - j + 1)$$
(each term in the summation corresponds to the number of nodes at a specific depth of the tree). The number of leaves is
$$\prod_{j=1}^{S} (|\Theta| - j + 1) = \binom{|\Theta|}{S} S!$$
which is a lower bound on the number of nodes. For large $|\Theta|$ we can use the relation $\binom{n}{k} \sim \frac{n^k}{k!}$ to get
$$\binom{|\Theta|}{S} S! \sim \frac{|\Theta|^S}{S!} S! = |\Theta|^S$$
and thus the number of leaves in the signal tree is $\Omega(|\Theta|^S)$. Thus, $O(|\Theta|^{S+5}) = O(n|\Theta|^5)$, which proves that we can indeed compute the ordered game isomorphic relation in time polynomial in the number of nodes, $n$, of the signal tree. The algorithm often runs in sublinear time (and space) in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games. (Note that the input to the algorithm is not an explicit game tree, but a specification of the rules, so the algorithm does not need to read in the game tree.) See Figure 1. In general, if an ordered game has $r$ rounds, and each round's stage game has at least $b$ nonterminal leaves, then the size of the signal tree is at most $\frac{1}{b^r}$ of the size of the game tree. For example, in Rhode Island Hold'em, the game tree has 3.1 billion nodes while the signal tree only has 6,632,705. Given the OrderedGameIsomorphic? routine for determining ordered game isomorphisms in an ordered game, we are ready to present the main algorithm, GameShrink.
Algorithm 2. GameShrink($\Gamma$)
1. Initialize $F$ to be the identity filter for $\Gamma$.
2. For $j$ from 1 to $r$: for each pair of sibling nodes $\vartheta, \vartheta'$ at either level $\sum_{k=1}^{j-1} (\kappa^k + n\gamma^k)$ or $\sum_{k=1}^{j} \kappa^k + \sum_{k=1}^{j-1} n\gamma^k$ in the filtered (according to $F$) signal tree: if OrderedGameIsomorphic?($\Gamma, \vartheta, \vartheta'$), then $F^j(\vartheta) \leftarrow F^j(\vartheta') \leftarrow F^j(\vartheta) \cup F^j(\vartheta')$.
3. Output $F$.
Given as input an ordered game $\Gamma$, GameShrink applies the shrinking ideas presented above as aggressively as possible. Once it finishes, there are no contractible nodes (since it compares every pair of nodes at each level of the signal tree), and it outputs the corresponding information filter $F$. The correctness of GameShrink follows by a repeated application of Theorem 2. Thus, we have the following result:
Theorem 3. GameShrink finds all ordered game isomorphisms and applies the associated ordered game isomorphic abstraction transformations. Furthermore, for any Nash equilibrium, $\sigma'$, of the abstracted game, the strategy profile constructed for the original game from $\sigma'$ is a Nash equilibrium.
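The following is a compact Python sketch of Algorithm 1. The signal-tree node interface (`parent`, `children`, `is_leaf`, `payoffs`, `edge_prob`) is hypothetical, plain recursion stands in for the bottom-up dynamic program, and Kuhn's augmenting-path matcher stands in for Ford-Fulkerson; it shows the structure of the test rather than the paper's implementation.

```python
def perfect_matching_exists(adj, n):
    """Kuhn's algorithm on a bipartite graph with n nodes per side;
    adj[u] lists the right-side nodes that u may be matched to."""
    match_r = [-1] * n
    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False
    return all(try_augment(u, set()) for u in range(n))

def ordered_game_isomorphic(x, y):
    if x.parent is not y.parent:                    # step 1: must be siblings
        return False
    if x.is_leaf() and y.is_leaf():                 # step 2: payoffs must agree
        return x.payoffs() == y.payoffs()           # for every action history
    cx, cy = x.children(), y.children()
    if len(cx) != len(cy):
        return False
    # Steps 3-4: edge (v, w) iff the transition probabilities match and the
    # subtrees rooted at v and w are themselves ordered game isomorphic.
    adj = [[j for j, w in enumerate(cy)
            if x.edge_prob(v) == y.edge_prob(w) and ordered_game_isomorphic(v, w)]
           for v in cx]
    return perfect_matching_exists(adj, len(cx))    # step 5: perfect matching?
```

GameShrink itself would call this test on every pair of sibling nodes, level by level, merging the filter classes of any pair that passes; the union-find representation discussed in Section 4.1 makes those merges cheap.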
The dominating factor in the run time of GameShrink is in the rth iteration of the main for-loop. There are at most $\binom{|\Theta|}{S} S!$ nodes at this level, where we again take S to be the maximum number of signals possibly revealed in the game. Thus, the inner for-loop executes $O\left(\left(\binom{|\Theta|}{S} S!\right)^2\right)$ times. As discussed in the next subsection, we use a union-find data structure to represent the information filter F. Each iteration of the inner for-loop possibly performs a union operation on the data structure; performing M operations on a union-find data structure containing N elements takes O(α(M, N)) amortized time per operation, where α(M, N) is the inverse Ackermann function [1, 49] (which grows extremely slowly). Thus, the total time for GameShrink is

$$O\left(\left(\binom{|\Theta|}{S} S!\right)^2 \alpha\left(\left(\binom{|\Theta|}{S} S!\right)^2, |\Theta|^S\right)\right).$$

By the inequality $\binom{n}{k} \le \frac{n^k}{k!}$, this is $O\left((|\Theta|^S)^2\, \alpha\left((|\Theta|^S)^2, |\Theta|^S\right)\right)$. Again, although this is exponential in S, it is $\tilde{O}(n^2)$, where n is the number of nodes in the signal tree. Furthermore, GameShrink tends to actually run in sublinear time and space in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games, as discussed above.

4.1 Efficiency enhancements

We designed several speed enhancement techniques for GameShrink, and all of them are incorporated into our implementation. One technique is the use of the union-find data structure for storing the information filter F. This data structure uses time almost linear in the number of operations [49]. Initially each node in the signal tree is its own set (this corresponds to the identity information filter); when two nodes are contracted they are joined into a new set. Upon termination, the filtered signals for the abstracted game correspond exactly to the disjoint sets in the data structure. This is an efficient method of recording contractions within the game tree, and the memory requirements are only linear in the size of the signal tree.

Determining whether two nodes are ordered game isomorphic requires us to determine if a bipartite graph has a perfect matching. We can eliminate some of these computations by using easy-to-check necessary conditions for the ordered game isomorphic relation to hold. One such condition is to check that the nodes have the same number of chances of being ranked (according to ≻) higher than, lower than, and the same as the opponents. We can precompute these frequencies for every game tree node. This substantially speeds up GameShrink, and we can leverage this database across multiple runs of the algorithm (for example, when trying different abstraction levels; see next section). The indices for this database depend on the private and public signals, but not the order in which they were revealed, and thus two nodes may have the same corresponding database entry. This makes the database significantly more compact. (For example, in Texas Hold'em the database is reduced by a factor $\binom{50}{3}\binom{47}{1}\binom{46}{1} / \binom{50}{5} = 20$.) We store the histograms in a 2-dimensional database. The first dimension is indexed by the private signals, the second by the public signals. The problem of computing the index in (either) one of the dimensions is exactly the problem of computing a bijection between all subsets of size r from a set of size n and integers in $\left[0, \ldots, \binom{n}{r} - 1\right]$. We efficiently compute this using the subsets' colexicographical ordering [6]. Let $\{c_1, \ldots, c_r\}$, $c_i \in \{0, \ldots, n-1\}$, denote the r signals and assume that $c_i < c_{i+1}$. We compute a unique index for this set of signals as follows: $\mathrm{index}(c_1, \ldots, c_r) = \sum_{i=1}^{r} \binom{c_i}{i}$.
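To make the indexing concrete, here is a small self-contained Python check (our illustration; the function name is ours) that the colexicographic formula is a bijection onto $[0, \binom{n}{r}-1]$.

```python
# Colexicographic subset index: {c_1 < ... < c_r} -> sum_i C(c_i, i).
from itertools import combinations
from math import comb

def colex_index(signals):
    """Map a size-r subset of {0, ..., n-1} to a unique integer
    in [0, C(n, r) - 1] via the colexicographic ordering."""
    return sum(comb(c, i) for i, c in enumerate(sorted(signals), start=1))

# Sanity check: the 2-subsets of {0, ..., 3} hit exactly the indices 0..5.
assert sorted(colex_index(s) for s in combinations(range(4), 2)) == list(range(6))
```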
5. APPROXIMATION METHODS

Some games are too large to compute an exact equilibrium, even after using the presented abstraction technique. This section discusses general techniques for computing approximately optimal strategy profiles. For a two-player game, we can always evaluate the worst-case performance of a strategy, thus providing some objective evaluation of the strength of the strategy. To illustrate this, suppose we know player 2's planned strategy for some game. We can then fix the probabilities of player 2's actions in the game tree as if they were chance moves. Then player 1 is faced with a single-agent decision problem, which can be solved bottom-up, maximizing expected payoff at every node. Thus, we can objectively determine the expected worst-case performance of player 2's strategy. This will be most useful when we want to evaluate how well a given strategy performs when we know that it is not an equilibrium strategy. (A variation of this technique may also be applied in n-person games where only one player's strategies are held fixed.) This technique provides ex post guarantees about the worst-case performance of a strategy, and can be used independently of the method that is used to compute the strategies.

5.1 State-space approximations

By slightly modifying GameShrink, we can obtain an algorithm that yields even smaller game trees, at the expense of losing the equilibrium guarantees of Theorem 2. Instead of requiring the payoffs at terminal nodes to match exactly, we can instead compute a penalty that increases as the difference in utility between two nodes increases. There are many ways in which the penalty function could be defined and implemented. One possibility is to create edge weights in the bipartite graphs used in Algorithm 1, and then instead of requiring perfect matchings in the unweighted graph we would instead require perfect matchings with low cost (i.e., only consider two nodes to be ordered game isomorphic if the corresponding bipartite graph has a perfect matching with cost below some threshold). Thus, with this threshold as a parameter, we have a knob to turn that in one extreme (threshold = 0) yields an optimal abstraction and in the other extreme (threshold = ∞) yields a highly abstracted game (this would in effect restrict players to ignoring all signals, but still observing actions). This knob also begets an anytime algorithm: one can solve increasingly less abstracted versions of the game, and evaluate the quality of the solution at every iteration using the ex post method discussed above.
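As an illustration of this weighted variant, the following Python sketch replaces the exact tests with a cumulative payoff-gap penalty and a min-cost perfect matching via SciPy. This is our own construction: the paper leaves the penalty function open, so the summed payoff gap is just one possible choice.

```python
# Sketch of the thresholded isomorphism test for state-space approximation.
# The penalty (summed payoff gaps) is one possible choice, not the paper's.
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e18   # finite stand-in for "unmatchable"

def penalty(a, b):
    """Cost of merging signal-tree nodes a and b."""
    if a.parent is not b.parent or len(a.children) != len(b.children):
        return BIG
    if a.is_leaf and b.is_leaf:
        # the exact test required equality; here differences are merely penalized
        return sum(abs(a.utility(z) - b.utility(z)) for z in a.histories())
    cost = np.array([[penalty(ca, cb) for cb in b.children]
                     for ca in a.children])
    rows, cols = linear_sum_assignment(cost)    # min-cost perfect matching
    return float(cost[rows, cols].sum())

def approx_isomorphic(a, b, threshold):
    # threshold = 0 recovers the lossless test of Algorithm 1;
    # raising it merges more nodes, shrinking the game further
    return penalty(a, b) <= threshold
```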
5.2 Algorithmic approximations

In the case of two-player zero-sum games, the equilibrium computation can be modeled as a linear program (LP), which can in turn be solved using the simplex method. This approach has inherent features which we can leverage into desirable properties in the context of solving games. In the LP, primal solutions correspond to strategies of player 2, and dual solutions correspond to strategies of player 1. There are two versions of the simplex method: the primal simplex and the dual simplex. The primal simplex maintains primal feasibility and proceeds by finding better and better primal solutions until the dual solution vector is feasible, at which point optimality has been reached. Analogously, the dual simplex maintains dual feasibility and proceeds by finding increasingly better dual solutions until the primal solution vector is feasible. (The dual simplex method can be thought of as running the primal simplex method on the dual problem.) Thus, the primal and dual simplex methods serve as anytime algorithms (for a given abstraction) for players 2 and 1, respectively. At any point in time, they can output the best strategies found so far. Also, for any feasible solution to the LP, we can get bounds on the quality of the strategies by examining the primal and dual solutions. (When using the primal simplex method, dual solutions may be read off of the LP tableau.) Every feasible solution of the dual yields an upper bound on the optimal value of the primal, and vice versa [9, p. 57]. Thus, without requiring further computation, we get lower bounds on the expected utility of each agent's strategy against that agent's worst-case opponent.

One problem with the simplex method is that it is not a primal-dual algorithm, that is, it does not maintain both primal and dual feasibility throughout its execution. (In fact, it only obtains primal and dual feasibility at the very end of execution.) In contrast, there are interior-point methods for linear programming that maintain primal and dual feasibility throughout the execution. For example, many interior-point path-following algorithms have this property [55, Ch. 5]. We observe that running such a linear programming method yields a method for finding ε-equilibria (i.e., strategy profiles in which no agent can increase her expected utility by more than ε by deviating). A threshold on ε can also be used as a termination criterion for using the method as an anytime algorithm. Furthermore, interior-point methods in this class have polynomial-time worst-case run time, as opposed to the simplex algorithm, which takes exponentially many steps in the worst case.
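To ground the primal/dual bound discussion, here is a toy Python example: a plain zero-sum matrix game rather than the sequence-form LP the paper actually solves, with an invented payoff matrix. It reads one player's strategy and the guaranteed game value from the LP solution.

```python
# Toy zero-sum matrix game solved as an LP with SciPy (illustrative only).
# Row player maximizes v subject to earning at least v against every column.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],        # hypothetical payoff matrix (row's payoffs)
              [-2.0, 3.0]])
m, n = A.shape
c = np.zeros(m + 1); c[-1] = -1.0            # variables (x, v); minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), [[0.0]]]) # x is a probability distribution
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)], method="highs")
x, v = res.x[:m], res.x[-1]
print("row strategy:", x, "guaranteed value (lower bound):", v)
# With HiGHS, res.ineqlin.marginals holds the duals of the column constraints,
# which (up to sign) form the column player's strategy and give the matching
# upper bound, mirroring the primal/dual bounds described above.
```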
6. RELATED RESEARCH

Functions that transform extensive form games have been introduced [50, 11]. In contrast to our work, those approaches were not for making the game smaller and easier to solve. The main result is that a game can be derived from another by a sequence of those transformations if and only if the games have the same pure reduced normal form. The pure reduced normal form is the extensive form game represented as a game in normal form where duplicates of pure strategies (i.e., ones with identical payoffs) are removed and players essentially select equivalence classes of strategies [27]. An extension to that work shows a similar result, but for slightly different transformations and mixed reduced normal form games [21]. Modern treatments of this prior work on game transformations exist [38, Ch. 6], [10].

The recent notion of weak isomorphism in extensive form games [7] is related to our notion of restricted game isomorphism. The motivation of that work was to justify solution concepts by arguing that they are invariant with respect to isomorphic transformations. Indeed, the author shows, among other things, that many solution concepts, including Nash, perfect, subgame perfect, and sequential equilibrium, are invariant with respect to weak isomorphisms. However, that definition requires that the games to be tested for weak isomorphism are of the same size. Our focus is totally different: we find strategically equivalent smaller games. Also, their paper does not provide algorithms.

Abstraction techniques have been used in artificial intelligence research before. In contrast to our work, most (but not all) research involving abstraction has been for single-agent problems (e.g., [20, 32]). Furthermore, the use of abstraction typically leads to sub-optimal solutions, unlike the techniques presented in this paper, which yield optimal solutions. A notable exception is the use of abstraction to compute optimal strategies for the game of Sprouts [2]. However, a significant difference to our work is that Sprouts is a game of perfect information.

One of the first pieces of research to use abstraction in multi-agent settings was the development of partition search, which is the algorithm behind GIB, the world's first expert-level computer bridge player [17, 18]. In contrast to other game tree search algorithms, which store a particular game position at each node of the search tree, partition search stores groups of positions that are similar. (Typically, the similarity of two game positions is computed by ignoring the less important components of each game position and then checking whether the abstracted positions are similar, in some domain-specific, expert-defined sense, to each other.) Partition search can lead to substantial speed improvements over α-β-search. However, it is not game theory-based (it does not consider information sets in the game tree), and thus does not solve for the equilibrium of a game of imperfect information, such as poker.8 Another difference is that the abstraction is defined by an expert human while our abstractions are determined automatically.

There has been some research on the use of abstraction for imperfect information games. Most notably, Billings et al. [4] describe a manually constructed abstraction for Texas Hold'em poker, and include promising results against expert players. However, this approach has significant drawbacks. First, it is highly specialized for Texas Hold'em. Second, a large amount of expert knowledge and effort was used in constructing the abstraction. Third, the abstraction does not preserve equilibrium: even if applied to a smaller game, it might not yield a game-theoretic equilibrium. Promising ideas for abstraction in the context of general extensive form games have been described in an extended abstract [39], but to our knowledge, have not been fully developed.

8 Bridge is also a game of imperfect information, and partition search does not find the equilibrium for that game either. Instead, partition search is used in conjunction with statistical sampling to simulate the uncertainty in bridge. There are also other bridge programs that use search techniques for perfect information games in conjunction with statistical sampling and expert-defined abstraction [48]. Such (non-game-theoretic) techniques are unlikely to be competitive in poker because of the greater importance of information hiding and bluffing.

7. CONCLUSIONS AND DISCUSSION

We introduced the ordered game isomorphic abstraction transformation and gave an algorithm, GameShrink, for abstracting the game using the isomorphism exhaustively. We proved that in games with ordered signals, any Nash equilibrium in the smaller abstracted game maps directly to a Nash equilibrium in the original game. The complexity of GameShrink is $\tilde{O}(n^2)$, where n is the number of nodes in the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.
Using GameShrink, we found a minimax equilibrium to Rhode Island Hold'em, a poker game with 3.1 billion nodes in the game tree, over four orders of magnitude more than in the largest poker game solved previously. To further improve scalability, we introduced an approximation variant of GameShrink, which can be used as an anytime algorithm by varying a parameter that controls the coarseness of abstraction. We also discussed how (in a two-player zero-sum game) linear programming can be used in an anytime manner to generate approximately optimal strategies of increasing quality. The method also yields bounds on the suboptimality of the resulting strategies. We are currently working on using these techniques for full-scale 2-player limit Texas Hold'em poker, a highly popular card game whose game tree has about $10^{18}$ nodes. That game tree size has required us to use the approximation version of GameShrink (as well as round-based abstraction) [16, 15].

8. REFERENCES

[1] W. Ackermann. Zum Hilbertschen Aufbau der reellen Zahlen. Math. Annalen, 99:118-133, 1928.
[2] D. Applegate, G. Jacobson, and D. Sleator. Computer analysis of sprouts. Technical Report CMU-CS-91-144, 1991.
[3] R. Bellman and D. Blackwell. Some two-person games involving bluffing. PNAS, 35:600-605, 1949.
[4] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In IJCAI, 2003.
[5] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron. The challenge of poker. Artificial Intelligence, 134:201-240, 2002.
[6] B. Bollobás. Combinatorics. Cambridge University Press, 1986.
[7] A. Casajus. Weak isomorphism of extensive games. Mathematical Social Sciences, 46:267-290, 2003.
[8] X. Chen and X. Deng. Settling the complexity of 2-player Nash equilibrium. ECCC, Report No. 150, 2005.
[9] V. Chvátal. Linear Programming. W. H. Freeman & Co., 1983.
[10] B. P. de Bruin. Game transformations and game equivalence. Technical note x-1999-01, University of Amsterdam, Institute for Logic, Language, and Computation, 1999.
[11] S. Elmes and P. J. Reny. On the strategic equivalence of extensive form games. Journal of Economic Theory, 62:1-23, 1994.
[12] L. R. Ford, Jr. and D. R. Fulkerson. Flows in Networks. Princeton University Press, 1962.
[13] A. Gilpin and T. Sandholm. Finding equilibria in large sequential games of imperfect information. Technical Report CMU-CS-05-158, Carnegie Mellon University, 2005.
[14] A. Gilpin and T. Sandholm. Optimal Rhode Island Hold'em poker. In AAAI, pages 1684-1685, Pittsburgh, PA, USA, 2005.
[15] A. Gilpin and T. Sandholm. A competitive Texas Hold'em poker player via automated abstraction and real-time equilibrium computation. Mimeo, 2006.
[16] A. Gilpin and T. Sandholm. A Texas Hold'em poker player based on automated abstraction and real-time equilibrium computation. In AAMAS, Hakodate, Japan, 2006.
[17] M. L. Ginsberg. Partition search. In AAAI, pages 228-233, Portland, OR, 1996.
[18] M. L. Ginsberg. GIB: Steps toward an expert-level bridge-playing program. In IJCAI, Stockholm, Sweden, 1999.
[19] S. Govindan and R. Wilson. A global Newton method to compute Nash equilibria. Journal of Economic Theory, 110:65-86, 2003.
[20] C. A. Knoblock. Automatically generating abstractions for planning. Artificial Intelligence, 68(2):243-302, 1994.
[21] E. Kohlberg and J.-F. Mertens. On the strategic stability of equilibria. Econometrica, 54:1003-1037, 1986.
[22] D. Koller and N. Megiddo. The complexity of two-person zero-sum games in extensive form. Games and Economic Behavior, 4(4):528-552, Oct. 1992.
[23] D. Koller and N. Megiddo. Finding mixed strategies with small supports in extensive form games. International Journal of Game Theory, 25:73-92, 1996.
[24] D. Koller, N. Megiddo, and B. von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14(2):247-259, 1996.
[25] D. Koller and A. Pfeffer. Representations and solutions for game-theoretic problems. Artificial Intelligence, 94(1):167-215, July 1997.
[26] D. M. Kreps and R. Wilson. Sequential equilibria. Econometrica, 50(4):863-894, 1982.
[27] H. W. Kuhn. Extensive games. PNAS, 36:570-576, 1950.
[28] H. W. Kuhn. A simplified two-person poker. In Contributions to the Theory of Games, volume 1 of Annals of Mathematics Studies, 24, pages 97-103. Princeton University Press, 1950.
[29] H. W. Kuhn. Extensive games and the problem of information. In Contributions to the Theory of Games, volume 2 of Annals of Mathematics Studies, 28, pages 193-216. Princeton University Press, 1953.
[30] C. Lemke and J. Howson. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12:413-423, 1964.
[31] R. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In ACM-EC, pages 36-41, 2003.
[32] C.-L. Liu and M. Wellman. On state-space abstraction for anytime evaluation of Bayesian networks. SIGART Bulletin, 7(2):50-57, 1996.
[33] A. Mas-Colell, M. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, 1995.
[34] R. D. McKelvey and A. McLennan. Computation of equilibria in finite games. In Handbook of Computational Economics, volume 1, pages 87-142. Elsevier, 1996.
[35] P. B. Miltersen and T. B. Sørensen. Computing sequential equilibria for two-player games. In SODA, pages 107-116, 2006.
[36] J. Nash. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36:48-49, 1950.
[37] J. F. Nash and L. S. Shapley. A simple three-person poker game. In Contributions to the Theory of Games, volume 1, pages 105-116. Princeton University Press, 1950.
[38] A. Perea. Rationality in Extensive Form Games. Kluwer Academic Publishers, 2001.
[39] A. Pfeffer, D. Koller, and K. Takusagawa. State-space approximations for extensive form games, July 2000. Talk given at the First International Congress of the Game Theory Society, Bilbao, Spain.
[40] R. Porter, E. Nudelman, and Y. Shoham. Simple search methods for finding a Nash equilibrium. In AAAI, pages 664-669, San Jose, CA, USA, 2004.
[41] I. Romanovskii. Reduction of a game with complete memory to a matrix game. Soviet Mathematics, 3:678-681, 1962.
[42] T. Sandholm and A. Gilpin. Sequences of take-it-or-leave-it offers: Near-optimal auctions without full valuation revelation. In AAMAS, Hakodate, Japan, 2006.
[43] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer programming methods for finding Nash equilibria. In AAAI, pages 495-501, Pittsburgh, PA, USA, 2005.
[44] R. Savani and B. von Stengel. Exponentially many steps for finding a Nash equilibrium in a bimatrix game. In FOCS, pages 258-267, 2004.
[45] R. Selten. Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit. Zeitschrift für die gesamte Staatswissenschaft, 12:301-324, 1965.
[46] R. Selten. Evolutionary stability in extensive two-person games - correction and further development. Mathematical Social Sciences, 16:223-266, 1988.
[47] J. Shi and M. Littman. Abstraction methods for game theoretic poker. In Computers and Games, pages 333-345. Springer-Verlag, 2001.
[48] S. J. J. Smith, D. S. Nau, and T. Throop. Computer bridge: A big win for AI planning. AI Magazine, 19(2):93-105, 1998.
[49] R. E. Tarjan. Efficiency of a good but not linear set union algorithm. Journal of the ACM, 22(2):215-225, 1975.
[50] F. Thompson. Equivalence of games in extensive form. RAND Memo RM-759, The RAND Corporation, Jan. 1952.
[51] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1947.
[52] B. von Stengel. Efficient computation of behavior strategies. Games and Economic Behavior, 14(2):220-246, 1996.
[53] B. von Stengel. Computing equilibria for two-person games. In Handbook of Game Theory, volume 3. North Holland, Amsterdam, 2002.
[54] R. Wilson. Computing equilibria of two-person games from the extensive form. Management Science, 18(7):448-460, 1972.
[55] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM, 1997.
Finding Equilibria in Large Sequential Games of Imperfect Information * ABSTRACT Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is ˜O (n2), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes--over four orders of magnitude more than in the largest poker game solved previously. We discuss several electronic commerce applications for GameShrink. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies. 1. INTRODUCTION In environments with more than one agent, an agent's outcome is generally affected by the actions of the other * This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship. agent (s). Consequently, the optimal action of one agent can depend on the others. Game theory provides a normative framework for analyzing such strategic situations. In particular, it provides solution concepts that define what rational behavior is in such settings. The most famous and important solution concept is that of Nash equilibrium [36]. It is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy. However, for the concept to be operational, we need algorithmic techniques for finding an equilibrium. Games can be classified as either games of perfect information or imperfect information. Chess and Go are examples of the former, and, until recently, most game playing work has been on games of this type. To compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes. If the agent is able to traverse the entire game tree, she simply computes an optimal strategy from the bottom-up, using the principle of backward induction .1 In computer science terms, this is done using minimax search (often in conjunction with αβ-pruning to reduce the search tree size and thus enhance speed). Minimax search runs in linear time in the size of the game tree .2 The differentiating feature of games of imperfect information, such as poker, is that they are not fully observable: when it is an agent's turn to move, she does not have access to all of the information about the world. 
In such games, the decision of what to do at a point in time cannot generally be optimally made without considering decisions at all other points in time (including ones on other paths of play) because those other decisions affect the probabilities of being at different states at the current point in time. Thus the algorithms for perfect information games do not solve games of imperfect information. For sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent .3 Unfortunately (even if equivalent strategies are replaced by a single strategy [27]) this representation is generally exponential in the size of the game tree [52]. By observing that one needs to consider only sequences of moves rather than pure strategies [41, 46, 22, 52], one arrives at a more compact representation, the sequence form, which is linear in the size of the game tree .4 For 2-player games, there is a polynomial-sized (in the size of the game tree) linear programming formulation (linear complementarity in the non-zero-sum case) based on the sequence form such that strategies for players 1 and 2 correspond to primal and dual variables. Thus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25].5 However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker. 1.1 Our approach In this paper, we take a different approach to tackling the difficult problem of equilibrium computation. Instead of developing an equilibrium-finding method per se, we instead develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game corresponds directly to an equilibrium in the original game. Thus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game. The motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game. To this end, we introduce games with ordered signals (Section 2), a broad class of games that has enough structure which we can exploit for abstraction purposes. Instead of operating directly on the game tree (something we found to be technically challenging), we instead introduce the use of information filters (Section 2.1), which coarsen the information each player receives. They are used in our analysis and abstraction algorithm. By operating only in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding. We introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantange of such symmetries (Section 3). As our main equilibrium result we have the following: constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8]. The most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44]. For a survey of equilibrium computation in 2-player games, see [53]. 
Recently, equilibriumfinding algorithms that enumerate supports (i.e., sets of pure strategies that are played with positive probability) have been shown efficient on many games [40], and efficient mixed integer programming algorithms that search in the space of supports have been developed [43]. For more than two players, many algorithms have been proposed, but they currently only scale to very small games [19, 34, 40]. 4There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23]. 5Recently this approach was extended to handle computing sequential equilibria [26] as well [35]. Theorem 2 Let Γ be a game with ordered signals, and let F be an information filter for Γ. Let F' be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let σ' be a Nash equilibrium strategy profile of the induced game ΓF (i.e., the game Γ using the filter F'). If σ is constructed by using the corresponding strategies of σ', then σ is a Nash equilibrium of ΓF. The proof of the theorem uses an equivalent characterization of Nash equilibria: σ is a Nash equilibrium if and only if there exist beliefs μ (players' beliefs about unknown information) at all points of the game reachable by σ such that σ is sequentially rational (i.e., a best response) given μ, where μ is updated using Bayes' rule. We can then use the fact that σ' is a Nash equilibrium to show that σ is a Nash equilibrium considering only local properties of the game. We also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4). Its complexity is ˜O (n2), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. We present several algorithmic and data structure related speed improvements (Section 4.1), and we demonstrate how a simple modification to our algorithm yields an approximation algorithm (Section 5). 1.2 Electronic commerce applications Sequential games of imperfect information are ubiquitous, for example in negotiation and in auctions. Often aspects of a player's knowledge are not pertinent for deciding what action the player should take at a given point in the game. On the trivial end, some aspects of a player's knowledge are never pertinent (e.g., whether it is raining or not has no bearing on the bidding strategy in an art auction), and such aspects can be completely left out of the model specification. However, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely. Furthermore, it may be highly non-obvious which aspects are pertinent in which states of the game. Our algorithm automatically discovers which aspects are irrelevant in different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation. One broad application area that has this property is sequential negotiation (potentially over multiple issues). Another broad application area is sequential auctions (potentially over multiple goods). 
For example, in those states of a 1-object auction where bidder A can infer that his valuation is greater than that of bidder B, bidder A can ignore all his other information about B's signals, although that information would be relevant for inferring B's exact valuation. Furthermore, in some states of the auction, a bidder might not care which exact other bidders have which valuations, but cares about which valuations are held by the other bidders in aggregate (ignoring their identities). Many open-cry sequential auction and negotiation mechanisms fall within the game model studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42]. Our techniques are in no way specific to an application. The main experiment that we present in this paper is on a recreational game. We chose a particular poker game as the benchmark problem because it yields an extremely complicated and enormous game tree, it is a game of imperfect information, it is fully specified as a game (and the data is available), and it has been posted as a challenge problem by others [47] (to our knowledge no such challenge problem instances have been proposed for electronic commerce applications that require solving sequential games). 1.3 Rhode Island Hold 'em poker Poker is an enormously popular card game played around the world. The 2005 World Series of Poker had over $103 million dollars in total prize money, including $56 million for the main event. Increasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments. Poker has been identified as an important research area in AI due to the uncertainty stemming from opponents' cards, opponents' future actions, and chance moves, among other reasons [5]. Almost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp. 186--219]. However, this work was limited to tiny games that could be solved by hand. More recently, AI researchers have been applying the computational power of modern hardware to computing game theory-based strategies for larger games. Koller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25]. Large-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies. Furthermore, the approximations were designed manually by a human expert. Our approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance. Rhode Island Hold 'em was invented as a testbed for computational game playing [47]. It was designed so that it was similar in style to Texas Hold 'em, yet not so large that devising reasonably intelligent strategies would be impossible. (The rules of Rhode Island Hold 'em, as well as a discussion of how Rhode Island Hold 'em can be modeled as a game with ordered signals, that is, it fits in our model, is available in an extended version of this paper [13].) We applied the techniques developed in this paper to find an exact (minimax) solution to Rhode Island Hold 'em, which has a game tree exceeding 3.1 billion nodes. Applying the sequence form to Rhode Island Hold 'em directly without abstraction yields a linear program with 91,224,226 rows, and the same number of columns. This is much too large for (current) linear programming algorithms to handle. 
We used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with 1,237,238 rows and columns--with 50,428,638 non-zero coefficients. We then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns. (Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986 rows and 89,121,538 columns, which still would have been prohibitively large to solve.) GameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations). Using a 1.65 GHz IBM eServer p5 570 with 64 gigabytes of RAM (the linear program solver actually needed 25 gigabytes), we solved it in 7 days and 17 hours using the interior-point barrier method of CPLEX version 9.1.2. We recently demonstrated our optimal Rhode Island Hold 'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http://www.cs.cmu.edu/ ~ gilpin/gsi. html. While others have worked on computer programs for playing Rhode Island Hold 'em [47], no optimal strategy has been found before. This is the largest poker game solved to date by over four orders of magnitude. 2. GAMES WITH ORDERED SIGNALS We work with a slightly restricted class of games, as compared to the full generality of the extensive form. This class, which we call games with ordered signals, is highly structured, but still general enough to capture a wide range of strategic situations. A game with ordered signals consists of a finite number of rounds. Within a round, the players play a game on a directed tree (the tree can be different in different rounds). The only uncertainty players face stems from private signals the other players have received and from the unknown future signals. In other words, players observe each others' actions, but potentially not nature's actions. In each round, there can be public signals (announced to all players) and private signals (confidentially communicated to individual players). For simplicity, we assume--as is the case in most recreational games--that within each round, the number of private signals received is the same across players (this could quite likely be relaxed). We also assume that the legal actions that a player has are independent of the signals received. For example, in poker, the legal betting actions are independent of the cards received. Finally, the strongest assumption is that there is a partial ordering over sets of signals, and the payoffs are increasing (not necessarily strictly) in these signals. For example, in poker, this partial ordering corresponds exactly to the ranking of card hands. 1. I = {1,..., n} is a finite set of players. 2. G = (G1,..., Gr), Gj = (Vj, Ej), is a finite collection of finite directed trees with nodes Vj and edges Ej. Let Zj denote the leaf nodes of Gj and let Nj (v) denote the outgoing neighbors of v E V j. Gj is the stage game for round j. 3. L = (L1,..., Lr), Lj: Vj \ Zj--+ I indicates which player acts (chooses an outgoing edge) at each internal node in round j. 4. Θ is a finite set of signals. 5. κ = (κ1,..., κr) and γ = (γ1,..., γr) are vectors of nonnegative integers, where κj and γj denote the number of public and private signals (per player), respectively, revealed in round j. Each signal θ E Θ may only be revealed once, and in each round every player receives the same number of private signals, so we require rj = 1 κj + nγj <| Θ |. 
The public information revealed in round j is αj E Θκj and the public information revealed in all rounds up through round j is ˜αj = (α1,..., αj). The private information revealed to player i E I in round j is βji E Θγj and the private information revaled to player i E I in all minal nodes within a stage game to one of two values: over, in which case the game ends, or continue, in which case the game continues to the next round. Clearly, we require ω (z) = over for all z ∈ Zr. Note that ω is independent of the signals. Let ωjover = {z ∈ Zj | ω (z) = over} and ωjcont = {z ∈ Zj | ω (z) = continue}. ωkcont × ωj over, at least one of the following two conditions holds: through round j and a player's utility is increasing in her private signals, everything else equal: We will use the term game with ordered signals and the term ordered game interchangeably. 2.1 Information filters In this subsection, we define an information filter for ordered games. Instead of completely revealing a signal (either public or private) to a player, the signal first passes through this filter, which outputs a coarsened signal to the player. By varying the filter applied to a game, we are able to obtain a wide variety of games while keeping the underlying action space of the game intact. We will use this when designing our abstraction techniques. Formally, an information filter is as follows. legal signals (i.e., no repeated signals) for one player through round j. An information filter for Γ is a collection F = ~ F1,..., Fr ~ where each Fj is a function Fj: Sj → 2Sj such that each of the following conditions hold: 1. (Truthfulness) (˜αj, ˜βji) ∈ Fj (˜αj, ˜βji) for all legal (˜αj, ˜βji). 2. (Independence) The range of Fj is a partition of Sj. 3. (Information preservation) If two values of a signal are distinguishable in round k, then they are distinguishable fpr each round j> k. Let mj = Ejl = 1 κl + γl. We require that for all legal (θ1,..., θmk,..., θmj) ⊆ Θ and (θ' 1,..., θ' mk,..., θ' mj) ⊆ Θ: (θ' 1,..., θ'm k) ∈ / Fk (θ1,..., θmk) = ⇒ (θ' 1,..., θ'm k,..., θ'm j) ∈ / F j (θ1,..., θmk,..., θmj). A game with ordered signals Γ and an information filter F for Γ defines a new game ΓF. We refer to such games as filtered ordered games. We are left with the original game if we use the identity filter Fj (˜αj, ˜βj ˜αj, ˜βj have the following simple (but important) result: A simple proof proceeds by constructing an extensive form game directly from the ordered game, and showing that it satisfies perfect recall. In determining the payoffs in a game with filtered signals, we take the average over all real signals in the filtered class, weighted by the probability of each real signal occurring. 2.2 Strategies and Nash equilibrium We are now ready to define behavior strategies in the context of filtered ordered games. DEFINITION 3. A behavior strategy for player i in round j of Γ = ~ I, G, L, Θ, κ, γ, p, ~, ω, u ~ with information filter F is a probability distribution over possible actions, and is defined for each player i, each round j, and each v ∈ Vj \ Zj for Lj (v) = i: (Δ (X) is the set of probability distributions over a finite set X.) A behavior strategy for player i in round j is σji = (σji, v1,..., σji, vm) for each vk ∈ Vj \ Zj where Lj (vk) = i. A behavior strategy for player i in Γ is σi = ` σ1i,..., σr ´. i A strategy profile is σ = (σ1,..., σn). A strategy profile with σi replaced by σ' i is (σ' i, σ-i) = (σ1,..., σi-1, σ' i, σi +1,..., σn). 
By an abuse of notation, we will say player i receives an expected payoff of ui (σ) when all players are playing the strategy profile σ. Strategy σi is said to be player i's best response to σ-i if for all other strategies σ' i for player i we have ui (σi, σ-i) ≥ ui (σ' i, σ-i). σ is a Nash equilibrium if, for every player i, σi is a best response for σ-i. A Nash equilibrium always exists in finite extensive form games [36], and one exists in behavior strategies for games with perfect recall [29]. Using these observations, we have the following corollary to Proposition 1: 3. EQUILIBRIUM-PRESERVING ABSTRACTIONS In this section, we present our main technique for reducing the size of games. We begin by defining a filtered signal tree which represents all of the chance moves in the game. The bold edges (i.e. the first two levels of the tree) in the game trees in Figure 1 correspond to the filtered signal trees in each game. DEFINITION 4. Associated with every ordered game Γ = (I, G, L, Θ, κ, γ, p,> -, ω, u) and information filter F is a filtered signal tree, a directed tree in which each node corresponds to some revealed (filtered) signals and edges correspond to revealing specific (filtered) signals. The nodes in the filtered signal tree represent the set of all possible revealed filtered signals (public and private) at some point in time. The filtered public signals revealed in round j correspond to the nodes in the κj levels beginning at level Pj − 1 ` κk + nγk ´ k = 1 nγk. We denote children of a node x as N (x). In addition, we associate weights with the edges corresponding to the probability of the particular edge being chosen given that its parent was reached. In many games, there are certain situations in the game that can be thought of as being strategically equivalent to other situations in the game. By melding these situations together, it is possible to arrive at a strategically equivalent smaller game. The next two definitions formalize this notion via the introduction of the ordered game isomorphic relation and the ordered game isomorphic abstraction transformation. ordered game isomorphic, and ϑ and ϑ ~ are at either level ` κk + nγk ´ or Pj k = 1 κk + Pj − 1 k = 1 nγk for some roundk = 1 j. The ordered game isomorphic abstraction transformation is given by creating a new information filter F ~: Figure 1 shows the ordered game isomorphic abstraction transformation applied twice to a tiny poker game. Theorem 2, our main equilibrium result, shows how the ordered game isomorphic abstraction transformation can be used to compute equilibria faster. THEOREM 2. Let Γ = (I, G, L, Θ, κ, γ, p,> -, ω, u) be an ordered game and F be an information filter for Γ. Let F ~ be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation. a Nash equilibrium of ΓF. PROOF. For an extensive form game, a belief system μ assigns a probability to every decision node x such that for all other strategies τi, where i is the player who controls h. A basic result [33, Proposition 9.C .1] characterizing Nash equilibria dictates that σ is a Nash equilibrium if and only if there is a belief system μ such that for every information set h with Pr (h I σ)> 0, the following two conditions hold: (C1) σ is sequentially rational at h given μ; and (C2) μ (x) = Pr (h | σ) for all x E h. Since σ ~ is a Nash equilibrium of Γ ~, there exists such a belief system μ ~ for ΓF. 
Using μ ~, we will construct a belief system μ for Γ and show that conditions C1 and C2 hold, thus supporting σ as a Nash equilibrium. Fix some player i E I. Each of i's information sets in some" round j corresponds to filtered signals Fj "˜α ∗ j, ˜β ∗ j, history i in the first j--1 rounds (z1,..., zj − 1) E j − 1 ωk cont, and his tory so far in round j, v E Vj \ Zj. Let z˜ = (z1,..., zj − 1, v) represent all of the player actions leading to this information set. Thus, we can uniquely specify this information set using the information ˜α ∗ j, ˜β ∗ j, z˜. i Each node in an information set corresponds to the possible private signals the other players have received. Denote by β˜ some legal (Fj (˜αj, ˜βj1),..., Fj (˜αj, ˜βji − 1), F j (˜αj, ˜βji +1),..., Fj (˜αj, ˜βjn)). In other words, there exists (˜αj, ˜βj1,..., ˜βjn) such that notation and write F ~ j βˆ = ˆβ ~.) We can now compute μ Figure 1: GameShrink applied to a tiny two-person four-card (two Jacks and two Kings) poker game. Next to each game tree is the range of the information filter F. Dotted lines denote information sets, which are labeled by the controlling player. Open circles are chance nodes with the indicated transition probabilities. The root node is the chance node for player 1's card, and the next level is for player 2's card. The payment from player 2 to player 1 is given below each leaf. In this example, the algorithm reduces the game tree from 53 nodes to 19 nodes. Pr (ˆβ | F j (˜αj, ˜βj i)) where p ∗ = Pr (ˆβ ~ | F j (˜αj, ˜βj i)). The following three claims show that µ as calculated above supports ~ as a Nash equilibrium. First, we demonstrate a failure when removing the first assumption. Consider the game in Figure 2.6 Nodes a and b are in the same information set, have the same parent (chance) node, have isomorphic subtrees with the same payoffs, and nodes c and d also have similar structural properties. By merging the subtrees beginning at a and b, we get the game on the right in Figure 2. In this game, player 1's only Nash equilibrium strategy is to play left. But in the original game, player 1 knows that node c will never be reached, and so should play right in that information set. The proofs of Claims 1-3 are in an extended version of this paper [13]. By Claims 1 and 2, we know that condition C2 holds. By Claim 3, we know that condition C1 holds. Thus, ~ is a Nash equilibrium. 3.1 Nontriviality of generalizing beyond this model Our model does not capture general sequential games of imperfect information because it is restricted in two ways (as discussed above): 1) there is a special structure connecting the player actions and the chance actions (for one, the players are assumed to observe each others' actions, but nature's actions might not be publicly observable), and 2) there is a common ordering of signals. In this subsection we show that removing either of these conditions can make our technique invalid. Figure 2: Example illustrating difficulty in developing a theory of equilibrium-preserving abstractions for general extensive form games. Removing the second assumption (that the utility functions are based on a common ordering of signals) can also cause failure. Consider a simple three-card game with a deck containing two Jacks (J1 and J2) and a King (K), where player 1's utility function is based on the ordering K> - J1 J2 but player 2's utility function is based on the ordering J2> - K> - J1. 
It is easy to check that in the abstracted game (where Player 1 treats J1 and J2 as being "equivalent") the Nash equilibrium does not correspond to a Nash equilibrium in the original game .7 4. GAMESHRINK: AN EFFICIENT ALGORITHM FOR COMPUTING ORDERED GAME ISOMORPHIC ABSTRACTION TRANSFORMATIONS This section presents an algorithm, GameShrink, for conducting the abstractions. It only needs to analyze the signal tree discussed above, rather than the entire game tree. We first present a subroutine that GameShrink uses. It is a dynamic program for computing the ordered game isomorphic relation. Again, it operates on the signal tree. ALGORITHM 1. OrderedGameIsomorphic? (Γ, 19, 19') 1. If 19 and 19' have different parents, then return false. 2. If 19 and 19' are both leaves of the signal tree: (a) If ur (19 l ˜z) = ur (19' l ˜z) for all z˜ G wr over, then return true. (b) Otherwise, return false. 3. Create a bipartite graph Gϑ, ϑ = (V1, V2, E) with V1 = N (19) and V2 = N (19'). 4. For each v1 G V1 and v2 G V2: If OrderedGameIsomorphic? (Γ, v1, v2) Create edge (v1, v2) 5. Return true if Gϑ, ϑ has a perfect matching; otherwise, return false. By evaluating this dynamic program from bottom to top, Algorithm 1 determines, in time polynomial in the size of the signal tree, whether or not any pair of equal depth nodes x and y are ordered game isomorphic. We can further speed up this computation by only examining nodes with the same parent, since we know (from step 1) that no nodes with different parents are ordered game isomorphic. The test in step 2 (a) can be computed in O (1) time by consulting the> - relation from the specification of the game. Each call to OrderedGameIsomorphic? performs at most one perfect matching computation on a bipartite graph with O (lel) nodes and O (lel2) edges (recall that e is the set of signals). Using the Ford-Fulkerson algorithm [12] for finding a maximal matching, this takes O (lel3) time. Let S be the maximum number of signals possibly revealed in the game (e.g., in Rhode Island Hold 'em, S = 4 because each of the two players has one card in the hand plus there are two cards on the table). The number of nodes, n, in the signal tree is O (lelS). The dynamic program visits each node in the signal tree, with each visit requiring O (lel2) calls to the OrderedGameIsomorphic? routine. So, it takes O (lelSlel3lel2) = O (lelS +5) time to compute the entire ordered game isomorphic relation. While this is exponential in the number of revealed signals, we now show that it is polynomial in the size of the signal tree--and thus polynomial in the size of the game tree 7We thank an anonymous person for this example. because the signal tree is smaller than the game tree. The number of nodes in the signal tree is and thus the number of leaves in the signal tree is Ω (lelS). Thus, O (lelS +5) = O (nlel5), which proves that we can indeed compute the ordered game isomorphic relation in time polynomial in the number of nodes, n, of the signal tree. The algorithm often runs in sublinear time (and space) in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games. (Note that the input to the algorithm is not an explicit game tree, but a specification of the rules, so the algorithm does not need to read in the game tree.) See Figure 1. In general, if an ordered game has r rounds, and each round's stage game has at least b nonterminal leaves, then the size of the signal tree is at most br1 of the size of the game tree. 
For example, in Rhode Island Hold 'em, the game tree has 3.1 billion nodes while the signal tree only has 6,632,705. Given the OrderedGameIsomorphic? routine for determining ordered game isomorphisms in an ordered game, we are ready to present the main algorithm, GameShrink. ALGORITHM 2. GameShrink (Γ) 1. Initialize F to be the identity filter for Γ. 2. For j from 1 to r: 3. Output F. Given as input an ordered game Γ, GameShrink applies the shrinking ideas presented above as aggressively as possible. Once it finishes, there are no contractible nodes (since it compares every pair of nodes at each level of the signal tree), and it outputs the corresponding information filter F. The correctness of GameShrink follows by a repeated application of Theorem 2. Thus, we have the following result: THEOREM 3. GameShrink finds all ordered game isomorphisms and applies the associated ordered game isomorphic abstraction transformations. Furthermore, for any Nash equilibrium, σ', of the abstracted game, the strategy profile constructed for the original game from σ' is a Nash equilibrium. The dominating factor in the run time of GameShrink is in the rth iteration of the main for-loop. There are at most ` 1Θ1 ´ S! nodes at this level, where we again take S to be the S maximum number of signals possibly revealed in the game. „"` 1Θ1 ´ S! Thus, the inner for-loop executes O S discussed in the next subsection, we use a union-find data structure to represent the information filter F. Each iteration of the inner for-loop possibly performs a union operation on the data structure; performing M operations on a union-find data structure containing N elements takes O (α (M, N)) amortized time per operation, where α (M, N) is the inverse Ackermann's function [1, 49] (which grows extremely slowly). Thus, the total time for GameShrink is though this is exponential in S, it is ˜O (n2), where n is the number of nodes in the signal tree. Furthermore, GameShrink tends to actually run in sublinear time and space in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games, as discussed above. 4.1 Efficiency enhancements We designed several speed enhancement techniques for GameShrink, and all of them are incorporated into our implementation. One technique is the use of the union-find data structure for storing the information filter F. This data structure uses time almost linear in the number of operations [49]. Initially each node in the signalling tree is its own set (this corresponds to the identity information filter); when two nodes are contracted they are joined into a new set. Upon termination, the filtered signals for the abstracted game correspond exactly to the disjoint sets in the data structure. This is an efficient method of recording contractions within the game tree, and the memory requirements are only linear in the size of the signal tree. Determining whether two nodes are ordered game isomorphic requires us to determine if a bipartite graph has a perfect matching. We can eliminate some of these computations by using easy-to-check necessary conditions for the ordered game isomorphic relation to hold. One such condition is to check that the nodes have the same number of chances as being ranked (according to> -) higher than, lower than, and the same as the opponents. We can precompute these frequencies for every game tree node. 
This substantially speeds up GameShrink, and we can leverage this database across multiple runs of the algorithm (for example, when trying different abstraction levels; see next section). The indices for this database depend on the private and public signals, but not the order in which they were revealed, and thus two nodes may have the same corresponding database entry. This makes the database significantly more compact. a factor ` 50 (For example in Texas Hold 'em, the database is reduced by ´ ` 47 ´ ` 46 ´ / ` 50 ´ = 20.) We store the histograms 3 1 1 5 in a 2-dimensional database. The first dimension is indexed by the private signals, the second by the public signals. The problem of computing the index in (either) one of the dimensions is exactly the problem of computing a bijection between all subsets of size r from a set of size n and integers in ˆ0,..., ` n ´ − 1 ˜. We efficiently compute this using r the subsets' colexicographical ordering [6]. Let {c1,..., cr}, ci E {0,..., n − 1}, denote the r signals and assume that ci <ci +1. We compute a unique index for this set of signals as follows: index (c1,..., cr) = Pr ` ci ´. 5. APPROXIMATION METHODS Some games are too large to compute an exact equilibrium, even after using the presented abstraction technique. This section discusses general techniques for computing approximately optimal strategy profiles. For a two-player game, we can always evaluate the worst-case performance of a strategy, thus providing some objective evaluation of the strength of the strategy. To illustrate this, suppose we know player 2's planned strategy for some game. We can then fix the probabilities of player 2's actions in the game tree as if they were chance moves. Then player 1 is faced with a single-agent decision problem, which can be solved bottomup, maximizing expected payoff at every node. Thus, we can objectively determine the expected worst-case performance of player 2's strategy. This will be most useful when we want to evaluate how well a given strategy performs when we know that it is not an equilibrium strategy. (A variation of this technique may also be applied in n-person games where only one player's strategies are held fixed.) This technique provides ex post guarantees about the worst-case performance of a strategy, and can be used independently of the method that is used to compute the strategies. 5.1 State-space approximations By slightly modifying GameShrink, we can obtain an algorithm that yields even smaller game trees, at the expense of losing the equilibrium guarantees of Theorem 2. Instead of requiring the payoffs at terminal nodes to match exactly, we can instead compute a penalty that increases as the difference in utility between two nodes increases. There are many ways in which the penalty function could be defined and implemented. One possibility is to create edge weights in the bipartite graphs used in Algorithm 1, and then instead of requiring perfect matchings in the unweighted graph we would instead require perfect matchings with low cost (i.e., only consider two nodes to be ordered game isomorphic if the corresponding bipartite graph has a perfect matching with cost below some threshold). Thus, with this threshold as a parameter, we have a knob to turn that in one extreme (threshold = 0) yields an optimal abstraction and in the other extreme (threshold = oo) yields a highly abstracted game (this would in effect restrict players to ignoring all signals, but still observing actions). This knob also begets an anytime algorithm. 
One can solve increasingly less abstracted versions of the game, and evaluate the quality of the solution at every iteration using the ex post method discussed above.

5.2 Algorithmic approximations
In the case of two-player zero-sum games, the equilibrium computation can be modeled as a linear program (LP), which can in turn be solved using the simplex method. This approach has inherent features that we can leverage into desirable properties in the context of solving games. In the LP, primal solutions correspond to strategies of player 2, and dual solutions correspond to strategies of player 1. There are two versions of the simplex method: the primal simplex and the dual simplex. The primal simplex maintains primal feasibility and proceeds by finding better and better primal solutions until the dual solution vector is feasible, at which point optimality has been reached. Analogously, the dual simplex maintains dual feasibility and proceeds by finding increasingly better dual solutions until the primal solution vector is feasible. (The dual simplex method can be thought of as running the primal simplex method on the dual problem.) Thus, the primal and dual simplex methods serve as anytime algorithms (for a given abstraction) for players 2 and 1, respectively. At any point in time, they can output the best strategies found so far. Also, for any feasible solution to the LP, we can get bounds on the quality of the strategies by examining the primal and dual solutions. (When using the primal simplex method, dual solutions may be read off of the LP tableau.) Every feasible solution of the dual yields an upper bound on the optimal value of the primal, and vice versa [9, p. 57]. Thus, without requiring further computation, we get lower bounds on the expected utility of each agent's strategy against that agent's worst-case opponent. One problem with the simplex method is that it is not a primal-dual algorithm; that is, it does not maintain both primal and dual feasibility throughout its execution. (In fact, it only obtains primal and dual feasibility at the very end of execution.) In contrast, there are interior-point methods for linear programming that maintain primal and dual feasibility throughout the execution. For example, many interior-point path-following algorithms have this property [55, Ch. 5]. We observe that running such a linear programming method yields a method for finding ε-equilibria (i.e., strategy profiles in which no agent can increase her expected utility by more than ε by deviating). A threshold on ε can also be used as a termination criterion for using the method as an anytime algorithm. Furthermore, interior-point methods in this class have polynomial-time worst-case run time, as opposed to the simplex algorithm, which takes exponentially many steps in the worst case.

6. RELATED RESEARCH
Functions that transform extensive form games have been introduced [50, 11]. In contrast to our work, those approaches were not for making the game smaller and easier to solve. The main result is that a game can be derived from another by a sequence of those transformations if and only if the games have the same pure reduced normal form. The pure reduced normal form is the extensive form game represented as a game in normal form where duplicates of pure strategies (i.e., ones with identical payoffs) are removed and players essentially select equivalence classes of strategies [27].
An extension to that work shows a similar result, but for slightly different transformations and mixed reduced normal form games [21]. Modern treatments of this prior work on game transformations exist [38, Ch. 6], [10]. The recent notion of weak isomorphism in extensive form games [7] is related to our notion of restricted game isomorphism. The motivation of that work was to justify solution concepts by arguing that they are invariant with respect to isomorphic transformations. Indeed, the author shows, among other things, that many solution concepts, including Nash, perfect, subgame perfect, and sequential equilibrium, are invariant with respect to weak isomorphisms. However, that definition requires that the games to be tested for weak isomorphism are of the same size. Our focus is totally different: we find strategically equivalent smaller games. Also, their paper does not provide algorithms. Abstraction techniques have been used in artificial intelligence research before. In contrast to our work, most (but not all) research involving abstraction has been for single-agent problems (e.g., [20, 32]). Furthermore, the use of abstraction typically leads to sub-optimal solutions, unlike the techniques presented in this paper, which yield optimal solutions. A notable exception is the use of abstraction to compute optimal strategies for the game of Sprouts [2]. However, a significant difference from our work is that Sprouts is a game of perfect information. One of the first pieces of research to use abstraction in multi-agent settings was the development of partition search, which is the algorithm behind GIB, the world's first expert-level computer bridge player [17, 18]. In contrast to other game tree search algorithms, which store a particular game position at each node of the search tree, partition search stores groups of positions that are similar. (Typically, the similarity of two game positions is computed by ignoring the less important components of each game position and then checking whether the abstracted positions are similar--in some domain-specific expert-defined sense--to each other.) Partition search can lead to substantial speed improvements over α-β-search. However, it is not game theory-based (it does not consider information sets in the game tree), and thus does not solve for the equilibrium of a game of imperfect information, such as poker.8 Another difference is that the abstraction is defined by an expert human while our abstractions are determined automatically. There has been some research on the use of abstraction for imperfect information games. Most notably, Billings et al. [4] describe a manually constructed abstraction for Texas Hold 'em poker, and include promising results against expert players. However, this approach has significant drawbacks. First, it is highly specialized for Texas Hold 'em. Second, a large amount of expert knowledge and effort was used in constructing the abstraction. Third, the abstraction does not preserve equilibrium: even if applied to a smaller game, it might not yield a game-theoretic equilibrium. Promising ideas for abstraction in the context of general extensive form games have been described in an extended abstract [39], but to our knowledge, have not been fully developed.
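Before the conclusions, the ex post evaluation technique from the opening of Section 5 is worth making concrete. A minimal sketch (Python; the toy tree encoding is ours, not the paper's representation): fix player 2's strategy, treat her moves as chance, and solve player 1's single-agent problem bottom-up.

```python
def best_response_value(node):
    """Ex post evaluation (Section 5): with player 2's strategy fixed and her
    moves treated as chance, player 1 faces a single-agent decision problem
    solved bottom-up.  Toy encoding: ('leaf', payoff_to_p1),
    ('max', [children]) for player 1's choices, and
    ('chance', [(prob, child), ...]) for the fixed (or nature's) moves.
    """
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'max':                       # player 1 maximizes expected payoff
        return max(best_response_value(child) for child in node[1])
    return sum(p * best_response_value(child) for p, child in node[1])

# Player 2's fixed 50/50 behavior at her node; player 1 best-responds.
tree = ('max', [('chance', [(0.5, ('leaf', 2.0)), (0.5, ('leaf', -1.0))]),
                ('leaf', 0.3)])
print(best_response_value(tree))  # 0.5
```

The returned value is player 1's best-response payoff against the fixed strategy, i.e., an ex post bound on how much that strategy gives up in the worst case.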
7. CONCLUSIONS AND DISCUSSION
We introduced the ordered game isomorphic abstraction transformation and gave an algorithm, GameShrink, for abstracting the game using the isomorphism exhaustively. We proved that in games with ordered signals, any Nash equilibrium in the smaller abstracted game maps directly to a Nash equilibrium in the original game. The complexity of GameShrink is Õ(n²), where n is the number of nodes in the signal tree. The signal tree is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we found a minimax equilibrium to Rhode Island Hold 'em, a poker game with 3.1 billion nodes in the game tree--over four orders of magnitude more than in the largest poker game solved previously. To further improve scalability, we introduced an approximation variant of GameShrink, which can be used as an anytime algorithm by varying a parameter that controls the coarseness of abstraction. We also discussed how (in a two-player zero-sum game) linear programming can be used in an anytime manner to generate approximately optimal strategies of increasing quality. The method also yields bounds on the suboptimality of the resulting strategies. We are currently working on using these techniques for full-scale 2-player limit Texas Hold 'em poker, a highly popular card game whose game tree has about $10^{18}$ nodes. That game tree size has required us to use the approximation version of GameShrink (as well as round-based abstraction) [16, 15].

8 Bridge is also a game of imperfect information, and partition search does not find the equilibrium for that game either. Instead, partition search is used in conjunction with statistical sampling to simulate the uncertainty in bridge. There are also other bridge programs that use search techniques for perfect information games in conjunction with statistical sampling and expert-defined abstraction [48]. Such (non-game-theoretic) techniques are unlikely to be competitive in poker because of the greater importance of information hiding and bluffing.
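One further implementation aside before moving on: the colexicographic subset indexing used for the database in Section 4.1 fits in a few lines. A minimal sketch (Python); the self-check over the 2-subsets of {0, 1, 2, 3} is our illustration, not from the paper.

```python
from math import comb

def colex_index(signals):
    """Colexicographic rank of an r-subset {c_1 < ... < c_r} of {0,...,n-1}:
    index(c_1,...,c_r) = sum_{i=1}^{r} C(c_i, i), a bijection onto
    {0, ..., C(n, r) - 1} that never needs to know n."""
    assert all(a < b for a, b in zip(signals, signals[1:]))  # sorted, distinct
    return sum(comb(c, i) for i, c in enumerate(signals, start=1))

# The 2-subsets of {0, 1, 2, 3} in colex order receive ranks 0..5.
subsets = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
assert [colex_index(s) for s in subsets] == [0, 1, 2, 3, 4, 5]
```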
Finding Equilibria in Large Sequential Games of Imperfect Information*

* This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship.

ABSTRACT
Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is Õ(n²), where n is the number of nodes in a structure we call the signal tree. The signal tree is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes--over four orders of magnitude more than in the largest poker game solved previously. We discuss several electronic commerce applications for GameShrink. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.

1. INTRODUCTION
In environments with more than one agent, an agent's outcome is generally affected by the actions of the other agent(s). Consequently, the optimal action of one agent can depend on the others. Game theory provides a normative framework for analyzing such strategic situations. In particular, it provides solution concepts that define what rational behavior is in such settings. The most famous and important solution concept is that of Nash equilibrium [36]. It is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy. However, for the concept to be operational, we need algorithmic techniques for finding an equilibrium. Games can be classified as either games of perfect information or imperfect information. Chess and Go are examples of the former, and, until recently, most game-playing work has been on games of this type. To compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes. If the agent is able to traverse the entire game tree, she simply computes an optimal strategy from the bottom up, using the principle of backward induction.1 In computer science terms, this is done using minimax search (often in conjunction with α-β pruning to reduce the search tree size and thus enhance speed). Minimax search runs in linear time in the size of the game tree.2 The differentiating feature of games of imperfect information, such as poker, is that they are not fully observable: when it is an agent's turn to move, she does not have access to all of the information about the world.
In such games, the decision of what to do at a point in time cannot generally be optimally made without considering decisions at all other points in time (including ones on other paths of play), because those other decisions affect the probabilities of being at different states at the current point in time. Thus the algorithms for perfect information games do not solve games of imperfect information. For sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent.3 Unfortunately (even if equivalent strategies are replaced by a single strategy [27]), this representation is generally exponential in the size of the game tree [52]. By observing that one needs to consider only sequences of moves rather than pure strategies [41, 46, 22, 52], one arrives at a more compact representation, the sequence form, which is linear in the size of the game tree.4 For 2-player games, there is a polynomial-sized (in the size of the game tree) linear programming formulation (linear complementarity in the non-zero-sum case) based on the sequence form such that strategies for players 1 and 2 correspond to primal and dual variables. Thus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25].5 However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker.

1.1 Our approach
In this paper, we take a different approach to tackling the difficult problem of equilibrium computation. Instead of developing an equilibrium-finding method per se, we develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game corresponds directly to an equilibrium in the original game. Thus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game. The motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game. To this end, we introduce games with ordered signals (Section 2), a broad class of games with enough structure to exploit for abstraction purposes. Instead of operating directly on the game tree (something we found to be technically challenging), we introduce the use of information filters (Section 2.1), which coarsen the information each player receives. They are used in our analysis and abstraction algorithm. By operating only in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding. We introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantage of such symmetries (Section 3). As our main equilibrium result we have the following:
Theorem 2. Let Γ be a game with ordered signals, and let F be an information filter for Γ. Let F′ be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let σ′ be a Nash equilibrium strategy profile of the induced game Γ_F′ (i.e., the game Γ using the filter F′). If σ is constructed by using the corresponding strategies of σ′, then σ is a Nash equilibrium of Γ_F.

The proof of the theorem uses an equivalent characterization of Nash equilibria: σ is a Nash equilibrium if and only if there exist beliefs μ (players' beliefs about unknown information) at all points of the game reachable by σ such that σ is sequentially rational (i.e., a best response) given μ, where μ is updated using Bayes' rule. We can then use the fact that σ′ is a Nash equilibrium to show that σ is a Nash equilibrium considering only local properties of the game.

We also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4). Its complexity is Õ(n²), where n is the number of nodes in a structure we call the signal tree. The signal tree is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. We present several algorithmic and data-structure-related speed improvements (Section 4.1), and we demonstrate how a simple modification to our algorithm yields an approximation algorithm (Section 5).

3 An ε-equilibrium in a game with a constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8]. The most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44]. For a survey of equilibrium computation in 2-player games, see [53]. Recently, equilibrium-finding algorithms that enumerate supports (i.e., sets of pure strategies that are played with positive probability) have been shown efficient on many games [40], and efficient mixed integer programming algorithms that search in the space of supports have been developed [43]. For more than two players, many algorithms have been proposed, but they currently only scale to very small games [19, 34, 40].
4 There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23].
5 Recently this approach was extended to handle computing sequential equilibria [26] as well [35].

1.2 Electronic commerce applications
Sequential games of imperfect information are ubiquitous, for example in negotiation and in auctions. Often aspects of a player's knowledge are not pertinent for deciding what action the player should take at a given point in the game. On the trivial end, some aspects of a player's knowledge are never pertinent (e.g., whether it is raining or not has no bearing on the bidding strategy in an art auction), and such aspects can be completely left out of the model specification. However, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely. Furthermore, it may be highly non-obvious which aspects are pertinent in which states of the game. Our algorithm automatically discovers which aspects are irrelevant in different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation. One broad application area that has this property is sequential negotiation (potentially over multiple issues). Another broad application area is sequential auctions (potentially over multiple goods).
For example, in those states of a 1-object auction where bidder A can infer that his valuation is greater than that of bidder B, bidder A can ignore all his other information about B's signals, although that information would be relevant for inferring B's exact valuation. Furthermore, in some states of the auction, a bidder might not care which exact other bidders have which valuations, but cares about which valuations are held by the other bidders in aggregate (ignoring their identities). Many open-cry sequential auction and negotiation mechanisms fall within the game model studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42]. Our techniques are in no way specific to an application. The main experiment that we present in this paper is on a recreational game. We chose a particular poker game as the benchmark problem because it yields an extremely complicated and enormous game tree, it is a game of imperfect information, it is fully specified as a game (and the data is available), and it has been posted as a challenge problem by others [47] (to our knowledge no such challenge problem instances have been proposed for electronic commerce applications that require solving sequential games).

1.3 Rhode Island Hold 'em poker
Poker is an enormously popular card game played around the world. The 2005 World Series of Poker had over $103 million in total prize money, including $56 million for the main event. Increasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments. Poker has been identified as an important research area in AI due to the uncertainty stemming from opponents' cards, opponents' future actions, and chance moves, among other reasons [5]. Almost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp. 186--219]. However, this work was limited to tiny games that could be solved by hand. More recently, AI researchers have been applying the computational power of modern hardware to computing game theory-based strategies for larger games. Koller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25]. Large-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies. Furthermore, the approximations were designed manually by a human expert. Our approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance. Rhode Island Hold 'em was invented as a testbed for computational game playing [47]. It was designed so that it was similar in style to Texas Hold 'em, yet not so large that devising reasonably intelligent strategies would be impossible. (The rules of Rhode Island Hold 'em, as well as a discussion of how Rhode Island Hold 'em can be modeled as a game with ordered signals, that is, how it fits in our model, are available in an extended version of this paper [13].) We applied the techniques developed in this paper to find an exact (minimax) solution to Rhode Island Hold 'em, which has a game tree exceeding 3.1 billion nodes. Applying the sequence form to Rhode Island Hold 'em directly without abstraction yields a linear program with 91,224,226 rows, and the same number of columns. This is much too large for (current) linear programming algorithms to handle.
We used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with 1,237,238 rows and columns--with 50,428,638 non-zero coefficients. We then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns. (Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986 rows and 89,121,538 columns, which still would have been prohibitively large to solve.) GameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations). Using a 1.65 GHz IBM eServer p5 570 with 64 gigabytes of RAM (the linear program solver actually needed 25 gigabytes), we solved it in 7 days and 17 hours using the interior-point barrier method of CPLEX version 9.1.2. We recently demonstrated our optimal Rhode Island Hold 'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http://www.cs.cmu.edu/~gilpin/gsi.html. While others have worked on computer programs for playing Rhode Island Hold 'em [47], no optimal strategy has been found before. This is the largest poker game solved to date by over four orders of magnitude.
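The sequence-form LP above is far too large to display, but the primal/dual structure that Section 5.2 exploits can be seen at toy scale. A hedged sketch (Python; numpy and scipy assumed available) of the textbook max-min LP for a small zero-sum matrix game, not the paper's sequence-form formulation:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Max-min strategy for the row player of a zero-sum payoff matrix A.

    Maximize v subject to (A^T x)_j >= v for every column j, sum(x) = 1,
    x >= 0.  The dual of this LP is the column player's problem, mirroring
    the primal/dual correspondence discussed in Section 5.2.
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # minimize -v, i.e., maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0 for each j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                             # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: the value is 0 and the optimal strategy is uniform.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
x, v = solve_zero_sum(A)
print(np.round(x, 3), round(v, 3))
```

Any feasible primal/dual pair for such an LP brackets the game value, which is what yields the anytime quality bounds mentioned in Section 5.2.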
J-35
Efficiency and Nash Equilibria in a Scrip System for P2P Networks
A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.
[ "scrip system", "p2p network", "nash equilibrium", "social welfar", "agent", "onlin system", "gnutellum network", "reput system", "bittorr", "emul", "game", "maximum entropi", "threshold strategi", "game theori" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "U", "U", "U" ]
Efficiency and Nash Equilibria in a Scrip System for P2P Networks
Eric J. Friedman, School of Operations Research and Industrial Engineering, Cornell University, ejf27@cornell.edu
Joseph Y. Halpern, Computer Science Dept., Cornell University, halpern@cs.cornell.edu
Ian Kash, Computer Science Dept., Cornell University, kash@cs.cornell.edu

ABSTRACT
A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.

Categories and Subject Descriptors
C.2.4 [Computer-Communication Networks]: Distributed Systems; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce

General Terms
Economics, Theory

1. INTRODUCTION
A common feature of many online distributed systems is that individuals provide services for each other. Peer-to-peer (P2P) networks (such as Kazaa [25] or BitTorrent [3]) have proved popular as mechanisms for file sharing, and applications such as distributed computation and file storage are on the horizon; systems such as Seti@home [24] provide computational assistance; systems such as Slashdot [21] provide content, evaluations, and advice forums in which people answer each other's questions. Having individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the individual providing it. However, the cost of providing service can still be nontrivial. For example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some file-sharing systems, there is the possibility of being sued, which can be viewed as part of the cost. Thus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it. This is not merely a theoretical problem; studies of the Gnutella [22] network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [1]. Having relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system. Moreover, current trends seem to be leading to the elimination of the altruistic users on which these systems rely. These heavy users are some of the most expensive customers ISPs have. Thus, as the amount of traffic has grown, ISPs have begun to seek ways to reduce this traffic. Some universities have started charging students for excessive bandwidth usage; others revoke network access for it [5]. A number of companies have also formed whose service is to detect excessive bandwidth usage [19]. These trends make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems.
A significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files. Some of the P2P networks currently in use have implemented versions of these techniques. However, these approaches tend to fall into one of two categories: either they are barter-like or reputational. By barter-like, we mean that each agent bases its decisions only on information it has derived from its own interactions. Perhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work. Since the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones. An example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule [8]. Each user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past. However, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful. Anagnostakis and Greenwald [2] present a more sophisticated version of this approach, but it still seems to suffer from similar problems. A number of attempts have been made at providing general reputation systems (e.g., [12, 13, 17, 27]). The basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation. However, these attempts tend to suffer from practical problems because they implicitly view users as either good or bad, assume that the good users will act according to the specified protocol, and assume that there are relatively few bad users. Unfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it. We cannot count on only a few users being bad (in the sense of not following the prescribed protocol). For example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users. However, to avoid penalizing new users, they gave new users an average rating. Users discovered that they could use this relatively good rating to free ride for a while and, once it started to get bad, they could delete their stored information and effectively come back as a new user, thus circumventing the system (see [2] for a discussion and [11] for a formal analysis of this whitewashing). Thus Kazaa's reputation system is ineffective. This is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would. Recent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear. The analysis of the practical vulnerabilities, and the existence of systems that are immune to such attacks, remain areas of active research (e.g., [4, 28, 14]). Simple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement, and are quite popular (see, e.g., [13, 15, 26]).
However, they have a different set of problems. Perhaps the most common involves determining the amount of money in the system. Roughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make requests. On the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already. A related problem involves handling newcomers. If newcomers are each given a positive amount of money, then the system is open to sybil attacks. Perhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26]. In this paper, we provide a formal model in which to analyze scrip systems. We describe a simple scrip system and show that, under reasonable assumptions, for each fixed amount of money there is a nontrivial Nash equilibrium involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k.1 An interesting aspect of our analysis is that, in equilibrium, the distribution of users with each amount of money is the distribution that maximizes entropy (subject to the money supply constraint). This allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents. It also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency. While assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable. In systems where the goal is to do work, new users can acquire money by performing work. It should also work in Kazaa-like systems where a user can come in with some resources (e.g., a private collection of MP3s). The rest of the paper is organized as follows. In Section 2, we present our formal model and observe that it can be used to understand the effect of altruists. In Section 3, we examine what happens in the game under nonstrategic play, when all agents use the same threshold strategy. We show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy. Using this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy. Moreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users. In Section 6, we discuss the extent to which our approach can handle sybils and collusion. We conclude in Section 7.

1 Although we refer to our unit of scrip as the dollar, these are not real dollars, nor do we view them as convertible to dollars.

2. THE MODEL
To begin, we formalize providing service in a P2P network as a non-cooperative game. Unlike much of the modeling in this area, our model captures the asymmetric interactions in a file-sharing system, in which the matching of players (those requesting a file with those who have that particular file) is a key part of the system. This is in contrast with much previous work, which uses random matching in a prisoner's dilemma. Such models were studied in the economics literature [18, 7] and first applied to online reputations in [11]; an application to P2P is found in [9].
This random-matching model fails to capture some salient aspects of a number of important settings. When a request is made, there are typically many people in the network who can potentially satisfy it (especially in a large P2P network), but not all can. For example, some people may not have the time or resources to satisfy the request. The random-matching process ignores the fact that some people may not be able to satisfy the request. Presumably, if the person matched with the requester could not satisfy the match, he would have to defect. Moreover, it does not capture the fact that the decision as to whether to volunteer to satisfy the request should be made before the matching process, not after. That is, the matching process does not capture the fact that if someone is unwilling to satisfy the request, there will doubtless be others who can satisfy it. Finally, the actions and payoffs in the prisoner's dilemma game do not obviously correspond to actual choices that can be made. For example, it is not clear what defection on the part of the requester means. In our model we try to deal with all these issues. Suppose that there are n agents. At each round, an agent is picked uniformly at random to make a request. Each other agent is able to satisfy this request with probability β > 0 at all times, independent of previous behavior. The term β is intended to capture the probability that an agent is busy, or does not have the resources to fulfill the request. Assuming that β is time-independent does not capture the intuition that being unable to fulfill a request at time t may well be correlated with being unable to fulfill it at time t+1. We believe that, in large systems, we should be able to drop the independence assumption, but we leave this for future work. In any case, those agents that are able to satisfy the request must choose whether or not to volunteer to satisfy it. If at least one agent volunteers, the requester gets a benefit of 1 util (the job is done) and one of the volunteers is chosen at random to fulfill the request. The agent that fulfills the request pays a cost of α < 1. As is standard in the literature, we assume that agents discount future payoffs by a factor of δ per time unit. This captures the intuition that a util now is worth more than a util tomorrow, and allows us to compute the total utility derived by an agent in an infinite game. Lastly, we assume that with more players, requests come more often. Thus we assume that the time between rounds is 1/n. This captures the fact that the systems we want to model are really processing many requests in parallel, so we would expect the number of concurrent requests to be proportional to the number of users. (For large n, our model converges to one in which players make requests in real time, with the times between a given player's requests exponentially distributed with mean 1; in addition, the times between requests served by a single player are exponentially distributed.) Let G(n, δ, α, β) denote this game with n agents, a discount factor of δ, a cost to satisfy requests of α, and a probability of being able to satisfy requests of β. When the latter two parameters are not relevant, we sometimes write G(n, δ). We use the following notation throughout the paper:

• $p^t$ denotes the agent chosen in round t.

• $B_i^t \in \{0,1\}$ denotes whether agent i can satisfy the request in round t. $B_i^t = 1$ with probability $\beta > 0$, and $B_i^t$ is independent of $B_i^{t'}$ for all $t' \neq t$.

• $V_i^t \in \{0,1\}$ denotes agent i's decision about whether to volunteer in round t; 1 indicates volunteering. $V_i^t$ is determined by agent i's strategy.
• $v^t \in \{j \mid V_j^t B_j^t = 1\}$ denotes the agent chosen to satisfy the request. This agent is chosen uniformly at random from those who are willing ($V_j^t = 1$) and able ($B_j^t = 1$) to satisfy the request.

• $u_i^t$ denotes agent i's utility in round t. A standard agent is one whose utility is determined as discussed in the introduction; namely, the agent gets a utility of 1 for a fulfilled request and utility $-\alpha$ for fulfilling a request. Thus, if i is a standard agent, then

$$u_i^t = \begin{cases} 1 & \text{if } i = p^t \text{ and } \sum_{j \neq i} V_j^t B_j^t > 0 \\ -\alpha & \text{if } i = v^t \\ 0 & \text{otherwise.} \end{cases}$$

• $U_i = \sum_{t=0}^{\infty} \delta^{t/n} u_i^t$ denotes the total utility for agent i. It is the discounted total of agent i's utility in each round. Note that the effective discount factor is $\delta^{1/n}$, since an increase in n leads to a shortening of the time between rounds.

Now that we have a model of making and satisfying requests, we use it to analyze free riding. Take an altruist to be someone who always fulfills requests. Agent i might rationally behave altruistically if agent i's utility function has the following form, for some $\alpha' > 0$:

$$u_i^t = \begin{cases} 1 & \text{if } i = p^t \text{ and } \sum_{j \neq i} V_j^t B_j^t > 0 \\ \alpha' & \text{if } i = v^t \\ 0 & \text{otherwise.} \end{cases}$$

Thus, rather than suffering a loss of utility when satisfying a request, an agent derives positive utility from satisfying it. Such a utility function is a reasonable representation of the pleasure that some people get from the sense that they provide the music that everyone is playing. For such altruistic agents, playing the strategy that sets $V_i^t = 1$ for all t is dominant. While having a nonstandard utility function might be one reason that a rational agent might use this strategy, there are certainly others. For example, a naive user of filesharing software with a good connection might well follow this strategy. All that matters for the following discussion is that there are some agents that use this strategy, for whatever reason. As we have observed, such users seem to exist in some large systems. Suppose that our system has a altruists. Intuitively, if a is moderately large, they will manage to satisfy most of the requests in the system even if other agents do no work. Thus, there is little incentive for any other agent to volunteer, because he is already getting full advantage of participating in the system. Based on this intuition, it is a relatively straightforward calculation to determine a value of a that depends only on α, β, and δ, but not on the number n of players in the system, such that the dominant strategy for all standard agents i is to never volunteer to satisfy any requests (i.e., $V_i^t = 0$ for all t).

Proposition 2.1. There exists an a that depends only on α, β, and δ such that, in G(n, δ, α, β) with at least a altruists, not volunteering in every round is a dominant strategy for all standard agents.

Proof. Consider the strategy for a standard player j in the presence of a altruists. Even with no money, player j will get a request satisfied with probability $1 - (1-\beta)^a$ just through the actions of these altruists. Thus, even if j is chosen to make a request in every round, the most additional expected utility he can hope to gain by having money is $\sum_{k=1}^{\infty} (1-\beta)^a \delta^k \le (1-\beta)^a/(1-\delta)$. If $(1-\beta)^a/(1-\delta) < \alpha$ or, equivalently, if $a > \log_{1-\beta}(\alpha(1-\delta))$, never volunteering is a dominant strategy.

Consider the following reasonable values for our parameters: β = .01 (so that each player can satisfy 1% of the requests), α = .1 (a low but non-negligible cost), δ = .9999/day (which corresponds to a yearly discount factor of approximately 0.95), and an average of 1 request per day per player. Then we only need a > 1145. While this is a large number, it is small relative to the size of a large P2P network.
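The bound in Proposition 2.1 is easy to evaluate numerically. The following sketch (in Python; the function name is ours, not the paper's) computes $\log_{1-\beta}(\alpha(1-\delta))$ and reproduces the figure above:

```python
import math

def altruist_threshold(alpha, beta, delta):
    """Number of altruists beyond which never volunteering is dominant
    for standard agents: a > log_{1-beta}(alpha * (1 - delta)).
    Both logs below are negative, so the ratio is positive."""
    return math.log(alpha * (1 - delta)) / math.log(1 - beta)

# Parameter values from the text: beta = .01, alpha = .1, delta = .9999/day.
print(altruist_threshold(alpha=0.1, beta=0.01, delta=0.9999))  # ~1145.5
```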
Current systems all have a pool of users behaving like our altruists. This means that attempts to add a reputation system on top of an existing P2P system to influence users to cooperate will have no effect on rational users. To have a fair distribution of work, these systems must be fundamentally redesigned to eliminate the pool of altruistic users. In some sense, this is not a problem at all. In a system with altruists, the altruists are presumably happy, as are the standard agents, who get almost all their requests satisfied without having to do any work. Indeed, current P2P networks work quite well in terms of distributing content to people. However, as we said in the introduction, there is some reason to believe these altruists may not be around forever. Thus, it is worth looking at what can be done to make these systems work in their absence. For the rest of this paper we assume that all agents are standard, and try to maximize expected utility. We are interested in equilibria based on a scrip system. Each time an agent has a request satisfied, he must pay the person who satisfied it some amount. For now, we assume that the payment is fixed; for simplicity, we take the amount to be $1. We denote by M the total amount of money in the system. We assume that M > 0 (otherwise no one will ever be able to get paid). In principle, agents are free to adopt a very wide variety of strategies. They can make decisions based on the names of other agents or use a strategy that is heavily history-dependent, and mix these strategies freely. To aid our analysis, we would like to be able to restrict our attention to a simpler class of strategies. The class of strategies we are interested in is easy to motivate. The intuitive reason for wanting to earn money is to cater for the possibility that an agent will run out before he has a chance to earn more. On the other hand, a rational agent with plenty of money would not want to work, because by the time he has managed to spend all his money, the util will have less value than the present cost of working. The natural balance between these two is a threshold strategy. Let $S_k$ be the strategy where an agent volunteers whenever he has less than k dollars and not otherwise. Note that $S_0$ is the strategy where the agent never volunteers. While everyone playing $S_0$ is a Nash equilibrium (nobody can do better by volunteering if no one else is willing to), it is an uninteresting one. As we will show in Section 4, it is sufficient to restrict our attention to this class of strategies. We use $K_i^t$ to denote the amount of money agent i has at time t. Clearly $K_i^{t+1} = K_i^t$ unless agent i has a request satisfied, in which case $K_i^{t+1} = K_i^t - 1$, or agent i fulfills a request, in which case $K_i^{t+1} = K_i^t + 1$. Formally,

$$K_i^{t+1} = \begin{cases} K_i^t - 1 & \text{if } i = p^t,\ \sum_{j \neq i} V_j^t B_j^t > 0, \text{ and } K_i^t > 0 \\ K_i^t + 1 & \text{if } i = v^t \text{ and } K_{p^t}^t > 0 \\ K_i^t & \text{otherwise.} \end{cases}$$

The threshold strategy $S_k$ is the strategy such that

$$V_i^t = \begin{cases} 1 & \text{if } K_{p^t}^t > 0 \text{ and } K_i^t < k \\ 0 & \text{otherwise.} \end{cases}$$
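These dynamics are straightforward to simulate. Here is a minimal sketch (in Python; the function name, the equal initial split of the $M, and the use of undiscounted utilities are our own simplifications, not part of the model) of one sample path when every agent plays $S_k$:

```python
import random

def simulate(n, alpha, beta, k, M, rounds, seed=0):
    """Sample path of the scrip game when every agent plays S_k."""
    rng = random.Random(seed)
    money = [M // n] * n          # one (arbitrary) initial split of the $M
    util = [0.0] * n              # undiscounted per-agent utility
    for _ in range(rounds):
        p = rng.randrange(n)      # requester p^t, chosen uniformly at random
        if money[p] == 0:
            continue              # requester cannot pay, so no one volunteers
        # volunteers: agents below the threshold (V_j^t = 1) who are also
        # able this round (B_j^t = 1, which happens with probability beta)
        cands = [j for j in range(n)
                 if j != p and money[j] < k and rng.random() < beta]
        if not cands:
            continue              # request goes unsatisfied this round
        v = rng.choice(cands)     # v^t, uniform among the willing and able
        util[p] += 1.0            # requester gains 1 util ...
        money[p] -= 1             # ... and pays $1
        util[v] -= alpha          # volunteer pays the cost alpha ...
        money[v] += 1             # ... and earns the dollar
    return money, util
```

Tracking the histogram of money over such a run is exactly the Markov chain over money distributions studied in the next section.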
3. THE GAME UNDER NONSTRATEGIC PLAY

Before we consider strategic play, we examine what happens in the system if everyone just plays the same strategy $S_k$. Our overall goal is to show that there is some distribution over money (i.e., the fraction of people with each amount of money) such that the system converges to this distribution in a sense to be made precise shortly. Suppose that everyone plays $S_k$. For simplicity, assume that everyone has at most k dollars. We can make this assumption with essentially no loss of generality, since if someone has more than k dollars, he will just spend money until he has at most k dollars. After this point he will never acquire more than k. Thus, eventually the system will be in such a state. If M ≥ kn, no agent will ever be willing to work. Thus, for the purposes of this section we assume that M < kn. From the perspective of a single agent, in (stochastic) equilibrium, the agent is undergoing a random walk. However, the parameters of this random walk depend on the random walks of the other agents, and it is quite complicated to solve directly. Thus we consider an alternative analysis based on the evolution of the system as a whole. If everyone has at most k dollars, then the amount of money that an agent has is an element of {0, ..., k}. If there are n agents, then the state of the game can be described by identifying how much money each agent has, so we can represent it by an element of $S_{k,n} = \{0,\dots,k\}^{\{1,\dots,n\}}$. Since the total amount of money is constant, not all of these states can arise in the game. For example, the state where each player has $0 is impossible to reach in any game with money in the system. Let $m_S(s) = \sum_{i \in \{1,\dots,n\}} s(i)$ denote the total amount of money in the game at state s, where s(i) is the number of dollars that agent i has in state s. We want to consider only those states where the total money in the system is M, namely $S_{k,n,M} = \{s \in S_{k,n} \mid m_S(s) = M\}$. Under the assumption that all agents use strategy $S_k$, the evolution of the system can be treated as a Markov chain $\mathcal{M}_{k,n,M}$ over the state space $S_{k,n,M}$. It is possible to move from one state to another in a single round if, by choosing a particular agent to make a request and a particular agent to satisfy it, the amounts of money possessed by each agent become those in the second state. Therefore the probability of a transition from a state s to t is 0 unless there exist two agents i and j such that $s(i') = t(i')$ for all $i' \notin \{i,j\}$, $t(i) = s(i) + 1$, and $t(j) = s(j) - 1$. In this case the probability of transitioning from s to t is the probability of j being chosen to spend a dollar and having someone willing and able to satisfy his request, $(1/n)(1 - (1-\beta)^{|\{i' \mid s(i') < k\}| - I_j})$, multiplied by the probability of i being chosen to satisfy the request, $1/(|\{i' \mid s(i') < k\}| - I_j)$. Here $I_j$ is 0 if j has k dollars and 1 otherwise (it is just a correction for the fact that j cannot satisfy his own request).
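This transition probability can be written out directly. A sketch (in Python; states are tuples giving each agent's holdings, and the function name is ours):

```python
def transition_prob(s, t, k, beta):
    """Probability that the chain moves from state s to state t in one
    round when everyone plays S_k (0.0 if the move is impossible)."""
    n = len(s)
    changed = [a for a in range(n) if s[a] != t[a]]
    if len(changed) != 2:
        return 0.0
    i, j = changed                    # try i = earner, j = spender ...
    if not (t[i] == s[i] + 1 and t[j] == s[j] - 1):
        i, j = j, i                   # ... otherwise the other way round
        if not (t[i] == s[i] + 1 and t[j] == s[j] - 1):
            return 0.0
    if s[j] == 0 or s[i] >= k:
        return 0.0                    # j must be able to pay; i must be willing
    # willing agents, i.e. those with fewer than k dollars, excluding the
    # spender himself (this is |{i' : s(i') < k}| - I_j; it is >= 1 since
    # the earner i is always counted)
    willing = sum(1 for a in range(n) if s[a] < k and a != j)
    return (1.0 / n) * (1 - (1 - beta) ** willing) * (1.0 / willing)
```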
Let $\Delta^k$ denote the set of probability distributions on $\{0,\dots,k\}$. We can think of an element of $\Delta^k$ as describing the fraction of people with each amount of money. This is a useful way of looking at the system, since we typically don't care who has each amount of money, but just the fraction of people that have each amount. As before, not all elements of $\Delta^k$ are possible, given our constraint that the total amount of money is M. Rather than thinking in terms of the total amount of money in the system, it will prove more useful to think in terms of the average amount of money each player has. Of course, the total amount of money in a system with n agents is M iff the average amount that each player has is m = M/n. Let $\Delta^k_m$ denote all distributions $d \in \Delta^k$ such that $E(d) = m$ (i.e., $\sum_{j=0}^{k} d(j)\,j = m$). Given a state $s \in S_{k,n,M}$, let $d_s \in \Delta^k_m$ denote the distribution of money in s. Our goal is to show that, if n is large, then there is a distribution $d^* \in \Delta^k_m$ such that, with high probability, the Markov chain $\mathcal{M}_{k,n,M}$ will almost always be in a state s such that $d_s$ is close to $d^*$. Thus, agents can base their decisions about what strategy to use on the assumption that they will be in such a state. We can in fact completely characterize the distribution $d^*$. Given a distribution $d \in \Delta^k$, let

$$H(d) = -\sum_{\{j : d(j) \neq 0\}} d(j) \log(d(j))$$

denote the entropy of d. If $\Delta$ is a closed convex set of distributions, then it is well known that there is a unique distribution in $\Delta$ at which the entropy function takes its maximum value in $\Delta$. Since $\Delta^k_m$ is easily seen to be a closed convex set of distributions, it follows that there is a unique distribution in $\Delta^k_m$, which we denote $d^*_{k,m}$, whose entropy is greater than that of all other distributions in $\Delta^k_m$. We now show that, for n sufficiently large, the Markov chain $\mathcal{M}_{k,n,M}$ is almost surely in a state s such that $d_s$ is close to $d^*_{k,M/n}$. The statement is correct under a number of senses of "close"; for definiteness, we consider the Euclidean distance. Given $\varepsilon > 0$, let $S_{k,n,m,\varepsilon}$ denote the set of states s in $S_{k,n,mn}$ such that $\sum_{j=0}^{k} |d_s(j) - d^*_{k,m}(j)|^2 < \varepsilon$. Given a Markov chain $\mathcal{M}$ over a state space S and $S' \subseteq S$, let $X_{t,s,S'}$ be the random variable that denotes whether $\mathcal{M}$ is in a state of $S'$ at time t, when started in state s.

Theorem 3.1. For all $\varepsilon > 0$, all k, and all m, there exists $n^*$ such that for all $n > n^*$ and all states $s \in S_{k,n,mn}$, there exists a time $t^*$ (which may depend on k, n, m, and $\varepsilon$) such that for $t > t^*$, we have $\Pr(X_{t,s,S_{k,n,m,\varepsilon}}) > 1 - \varepsilon$.

Proof. (Sketch) Suppose that at some time t, $\Pr(X_{t,s,s'})$ is uniform over all $s'$. Then the probability of being in a set of states is just the size of the set divided by the total number of states. A standard technique from statistical mechanics is to show that there is a concentration phenomenon around the maximum-entropy distribution [16]. More precisely, using a straightforward combinatorial argument, it can be shown that the fraction of states not in $S_{k,n,m,\varepsilon}$ is bounded by $p(n)/e^{cn}$, where p is a polynomial. This fraction clearly goes to 0 as n gets large. Thus, for sufficiently large n, $\Pr(X_{t,s,S_{k,n,m,\varepsilon}}) > 1 - \varepsilon$ if $\Pr(X_{t,s,s'})$ is uniform. It is relatively straightforward to show that our Markov chain has a limit distribution $\pi$ over $S_{k,n,mn}$ such that for all $s, s' \in S_{k,n,mn}$, $\lim_{t\to\infty} \Pr(X_{t,s,s'}) = \pi_{s'}$. Let $P_{ij}$ denote the probability of transitioning from state i to state j. It is easily verified by an explicit computation of the transition probabilities that $P_{ij} = P_{ji}$ for all states i and j. It immediately follows from this symmetry that $\pi_s = \pi_{s'}$, so $\pi$ is uniform. After a sufficient amount of time, the distribution will be close enough to $\pi$ that the probabilities are again bounded by a constant, which is sufficient to complete the theorem.
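The maximizer $d^*_{k,m}$ can be computed directly: a standard Lagrange-multiplier argument shows that the maximum-entropy distribution on $\{0,\dots,k\}$ with mean m has the form $d(j) \propto \lambda^j$, so it suffices to solve for λ numerically. A sketch (Python with NumPy; this numerical recipe is ours, not the paper's):

```python
import numpy as np

def max_entropy_dist(k, m):
    """Maximum-entropy distribution on {0, ..., k} with mean m (0 < m < k).
    The maximizer has the form d(j) proportional to lam**j; the mean is
    increasing in lam, so we can bisect (in log space) to match m."""
    j = np.arange(k + 1)
    def mean(lam):
        w = lam ** j
        return (j * w).sum() / w.sum()
    lo, hi = 1e-9, 1e9
    for _ in range(200):                  # bisection on log(lam)
        mid = (lo * hi) ** 0.5
        lo, hi = (mid, hi) if mean(mid) < m else (lo, mid)
    w = lo ** j
    return w / w.sum()

# e.g. the experiments reported below use k = 5 and m = 2
print(max_entropy_dist(5, 2).round(4))
```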
[Figure 1: Distance from maximum-entropy distribution with 1000 agents. (Axes: Euclidean distance vs. number of steps.)]
[Figure 2: Maximum distance from maximum-entropy distribution over $10^6$ timesteps. (Axes: number of agents vs. maximum distance.)]
[Figure 3: Average time to get within .001 of the maximum-entropy distribution. (Axes: number of agents vs. time.)]

We performed a number of experiments that show that the maximum-entropy behavior described in Theorem 3.1 arises quickly for quite practical values of n and t. The first experiment showed that, even if n = 1000, we reach the maximum-entropy distribution quickly. We averaged 10 runs of the Markov chain for k = 5, where there is enough money for each agent to have $2, starting from a very extreme distribution (every agent has either $0 or $5), and considered the average time needed to come within various distances of the maximum-entropy distribution. As Figure 1 shows, after 2,000 steps, on average, the Euclidean distance from the average distribution of money to the maximum-entropy distribution is .008; after 3,000 steps, the distance is down to .001. Note that this is really only 3 real time units, since with 1000 players we have 1000 transactions per time unit. We then considered how close the distribution stays to the maximum-entropy distribution once it has reached it. To simplify things, we started the system in a state whose distribution was very close to the maximum-entropy distribution and ran it for $10^6$ steps, for various values of n. As Figure 2 shows, the system does not move far from the maximum-entropy distribution once it is there. For example, if n = 5000, the system is never more than distance .001 from the maximum-entropy distribution; if n = 25,000, it is never more than .0002 from the maximum-entropy distribution. Finally, we considered more carefully how quickly the system converges to the maximum-entropy distribution for various values of n. There are approximately $k^n$ possible states, so the convergence time could in principle be quite large. However, we suspect that the Markov chain that arises here is rapidly mixing, which means that it will converge significantly faster (see [20] for more details about rapid mixing). We believe that the actual time needed is O(n). This behavior is illustrated in Figure 3, which shows that for our example chain (again averaged over 10 runs), after 3n steps the Euclidean distance between the actual distribution of money in the system and the maximum-entropy distribution is less than .001.

4. THE GAME UNDER STRATEGIC PLAY

We have seen that the system is well behaved if the agents all follow a threshold strategy; we now want to show that there is a nontrivial Nash equilibrium where they do so (that is, a Nash equilibrium where all the agents use $S_k$ for some k > 0). This is not true in general. If δ is small, then agents have no incentive to work. Intuitively, if future utility is sufficiently discounted, then all that matters is the present, and there is no point in volunteering to work. With small δ, $S_0$ is the only equilibrium. However, we show that for δ sufficiently large, there is another equilibrium in threshold strategies. We do this by first showing that, if every other agent is playing a threshold strategy, then there is a best response that is also a threshold strategy (although not necessarily the same one). We then show that there must be some (mixed) threshold strategy for which this best response is the same strategy. It follows that this tuple of threshold strategies is a Nash equilibrium. As a first step, we show that, for all k, if everyone other than agent i is playing $S_k$, then there is a threshold strategy $S_{k'}$ that is a best response for agent i.
To prove this, we need to assume that the system is close to the steady-state distribution (i.e., the maximum-entropy distribution). However, as long as δ is sufficiently close to 1, we can ignore what happens during the period that the system is not in steady state. (Formally, we need to define the strategies when the system is far from equilibrium; however, these far-from-equilibrium strategies will not affect the equilibrium behavior when n is large, since deviations from stochastic equilibrium are extremely rare.) We have thus far considered threshold strategies of the form $S_k$, where k is a natural number; this is a discrete set of strategies. For a later proof, it will be helpful to have a continuous set of strategies. If $\gamma = k + \gamma'$, where k is a natural number and $0 \le \gamma' < 1$, let $S_\gamma$ be the strategy that performs $S_k$ with probability $1 - \gamma'$ and $S_{k+1}$ with probability $\gamma'$. (Note that we are not considering arbitrary mixed threshold strategies here, but rather just mixing between adjacent strategies, for the sole purpose of making our strategies continuous in a natural way.) Theorem 3.1 applies to the strategies $S_\gamma$ (the same proof goes through without change), where γ is an arbitrary nonnegative real number.

Theorem 4.1. Fix a strategy $S_\gamma$ and an agent i. There exist $\delta^* < 1$ and $n^*$ such that if $\delta > \delta^*$, $n > n^*$, and every agent other than i is playing $S_\gamma$ in game G(n, δ), then there is an integer $k'$ such that the best response for agent i is $S_{k'}$. Either $k'$ is unique (that is, there is a unique best response that is also a threshold strategy), or there exists an integer $k'$ such that $S_{\gamma'}$ is a best response for agent i for all $\gamma'$ in the interval $[k', k'+1]$ (and these are the only best responses among threshold strategies).

Proof. (Sketch) If δ is sufficiently large, we can ignore what happens before the system converges to the maximum-entropy distribution. If n is sufficiently large, then the strategy played by one agent will not affect the distribution of money significantly. Thus, the probability of i moving from one state (dollar amount) to another depends only on i's strategy (since we can take the probability that i will be chosen to make a request and the probability that i will be chosen to satisfy a request to be constant). Thus, from i's point of view, the system is a Markov decision process (MDP), and i needs to compute the optimal policy (strategy) for this MDP. It follows from standard results [23, Theorem 6.11.6] that there is an optimal policy that is a threshold policy. The argument that the best response is either unique or that there is an interval of best responses follows from a more careful analysis of the value function for the MDP.

We remark that there may be best responses that are not threshold strategies. All that Theorem 4.1 shows is that, among best responses, there is at least one that is a threshold strategy. Since we know that there is a best response that is a threshold strategy, we can look for a Nash equilibrium in the space of threshold strategies.

Theorem 4.2. For all M, there exist $\delta^* < 1$ and $n^*$ such that if $\delta > \delta^*$ and $n > n^*$, there exists a Nash equilibrium in the game G(n, δ) where all agents play $S_\gamma$ for some γ > 0.

Proof. It follows easily from the proof of Theorem 4.1 that if br(δ, γ) is the minimal best-response threshold strategy when all the other agents are playing $S_\gamma$ and the discount factor is δ, then, for fixed δ, br(δ, ·) is a step function. It also follows
from the theorem that if there are two best responses, then a mixture of them is also a best response. Therefore, if we can join the steps by a vertical line, we get a best-response curve. It is easy to see that everywhere this best-response curve crosses the diagonal y = x defines a Nash equilibrium where all agents are using the same threshold strategy. As we have already observed, one such equilibrium occurs at 0. If there are only $M in the system, we can restrict to threshold strategies $S_k$ where $k \le M + 1$. Since no one can have more than $M, all strategies $S_k$ for k > M are equivalent to $S_M$; these are just the strategies where the agent always volunteers in response to a request made by someone who can pay. Clearly $br(\delta, M) \le M$ for all δ, so the best-response function is at or below the diagonal at M. If $k \le M/n$, every player will have at least k dollars and so will be unwilling to work, and the best response is just 0. Consider $k^*$, the smallest k such that k > M/n. It is not hard to show that for $k^*$ there exists a $\delta^*$ such that for all $\delta \ge \delta^*$, $br(\delta, k^*) \ge k^*$. It follows by continuity that if $\delta \ge \delta^*$, there must be some γ such that $br(\delta, \gamma) = \gamma$. This is the desired Nash equilibrium. This argument also shows us that we cannot in general expect fixed points to be unique. If $br(\delta, k^*) = k^*$ and $br(\delta, k^*+1) > k^*+1$, then our argument shows there must be a second fixed point. In general there may be multiple fixed points even when $br(\delta, k^*) > k^*$, as illustrated in Figure 4 with n = 1000 and M = 3000.

[Figure 4: The best-response function for n = 1000 and M = 3000. (Axes: strategy of the rest of the agents vs. best response.)]

Theorem 4.2 allows us to restrict our design to agents using threshold strategies with the confidence that there will be a nontrivial equilibrium. However, it does not rule out the possibility that there may be other equilibria that do not involve threshold strategies. It is even possible (although it seems unlikely) that some of these equilibria might be better.
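To illustrate the MDP view used in these proofs, here is a rough sketch of computing a best-response threshold by value iteration (Python with NumPy). Everything here is our own approximation, not the paper's construction: we assume the rest of the population plays $S_k$ with money distributed as some d (e.g., the maximum-entropy distribution computed earlier), and we use crude large-n estimates of the per-round event probabilities: a requester who can pay is served with probability close to 1, and a volunteering agent is picked to work with probability roughly q/(wn) per round, where q = 1 - d(0) is the probability the requester can pay and w is the fraction of willing agents (β drops out of these estimates because the chosen volunteer is uniform among the roughly βwn willing-and-able agents).

```python
import numpy as np

def best_response_threshold(delta, alpha, n, k, d,
                            x_max=50, iters=10**6, tol=1e-12):
    """Approximate best-response threshold for a single agent when all
    others play S_k with money distributed as d (an array over {0,...,k}).
    Uses the crude large-n event probabilities described above."""
    w = d[:k].sum()                     # fraction of others willing to work
    q = 1.0 - d[0]                      # probability a requester can pay
    p_req = 1.0 / n                     # this agent is the round's requester
    p_earn = q / (w * n)                # picked to work, if it volunteers
    disc = delta ** (1.0 / n)           # per-round discount factor
    x = np.arange(x_max + 1)
    up = np.minimum(x + 1, x_max)       # holdings after earning $1 (capped)
    V = np.zeros(x_max + 1)
    for _ in range(iters):
        # volunteer only when -alpha + disc*V(x+1) beats staying at disc*V(x)
        work = p_earn * np.maximum(-alpha + disc * (V[up] - V), 0.0)
        spend = np.zeros(x_max + 1)     # a satisfied request: +1 util, -$1
        spend[1:] = p_req * (1.0 + disc * (V[:-1] - V[1:]))
        newV = disc * V + work + spend
        done = np.max(np.abs(newV - V)) < tol
        V = newV
        if done:
            break
    worth = -alpha + disc * (V[up] - V) > 0
    return x_max if worth.all() else int(np.argmax(~worth))

# e.g., against S_5 at m = 2 (max_entropy_dist is from the earlier sketch):
# best_response_threshold(0.95, 0.05, 1000, 5, max_entropy_dist(5, 2))
```

Iterating this map from thresholds to best-response thresholds is one heuristic way to search for the fixed points of br(δ, ·) discussed above.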
5. SOCIAL WELFARE AND SCALABILITY

Our theorems show that for each value of M and n, for sufficiently large δ, there is a nontrivial Nash equilibrium where all the agents use some threshold strategy $S_{\gamma(M,n)}$. From the point of view of the system designer, not all equilibria are equally good; we want an equilibrium where as few agents as possible have $0 when they get a chance to make a request (so that they can pay for the request) and relatively few agents have more than the threshold amount of money (so that there are always plenty of agents to fulfill the request). There is a tension between these objectives. It is not hard to show that as the fraction of agents with $0 increases in the maximum-entropy distribution, the fraction of agents with the maximum amount of money decreases. Thus, our goal is to understand what the optimal amount of money should be in the system, given the number of agents. That is, we want to know the amount of money M that maximizes efficiency, i.e., the total expected utility if all the agents use $S_{\gamma(M,n)}$. (If there are multiple equilibria, we take $S_{\gamma(M,n)}$ to be the Nash equilibrium that has highest efficiency for fixed M and n.) We first observe that the most efficient equilibrium depends only on the ratio of M to n, not on the actual values of M and n.

Theorem 5.1. There exists $n^*$ such that for all games $G(n_1, \delta)$ and $G(n_2, \delta)$ where $n_1, n_2 > n^*$, if $M_1/n_1 = M_2/n_2$, then $S_{\gamma(M_1,n_1)} = S_{\gamma(M_2,n_2)}$.

Proof. Fix M/n = r. Theorem 3.1 shows that the maximum-entropy distribution depends only on k and the ratio M/n, not on M and n separately. Thus, given r, for each choice of k, there is a unique maximum-entropy distribution $d_{k,r}$. The best response br(δ, k) depends only on the distribution $d_{k,r}$, not on M or n. Thus, the Nash equilibrium depends only on the ratio r. That is, for all choices of M and n such that n is sufficiently large (so that Theorem 3.1 applies) and M/n = r, the equilibrium strategies are the same.

In light of Theorem 5.1, the system designer should ensure that there is enough money M in the system so that the ratio M/n is optimal. We are currently exploring exactly what the optimal ratio is. As our very preliminary results for β = 1 show in Figure 5, the ratio appears to be monotone increasing in δ, which matches the intuition that we should provide more patient agents with the opportunity to save more money. Additionally, it appears to be relatively smooth, which suggests that it may have a nice analytic solution.

[Figure 5: Optimal average amount of money, to the nearest .25, for β = 1. (Axes: discount rate δ vs. optimal ratio M/n.)]

We remark that, in practice, it may be easier for the designer to vary the price of fulfilling a request rather than injecting money into the system. This produces the same effect. For example, changing the cost of fulfilling a request from $1 to $2 is equivalent to halving the amount of money that each agent has. Similarly, halving the cost of fulfilling a request is equivalent to doubling the amount of money that everyone has. With a fixed amount of money M, there is an optimal product nc of the number of agents and the cost c of fulfilling a request. Theorem 5.1 also tells us how to deal with a dynamic pool of agents. Our system can handle newcomers relatively easily: simply allow them to join with no money. This gives existing agents no incentive to leave and rejoin as newcomers. We then change the price of fulfilling a request so that the optimal ratio is maintained. This method has the nice feature that it can be implemented in a distributed fashion; if all nodes in the system have a good estimate of n, then they can all adjust prices automatically. (Alternatively, the number of agents in the system can be posted in a public place.) Approaches that rely on adjusting the amount of money may require expensive system-wide computations (see [26] for an example), and must be carefully tuned to avoid creating incentives for agents to manipulate the system by which this is done. Note that, in principle, the realization that the cost of fulfilling a request can change can affect an agent's strategy. For example, if an agent expects the cost to increase, then he may want to defer volunteering to fulfill a request. However, if the number of agents in the system is always increasing, then the cost always decreases, so there is never any advantage in waiting. There may be an advantage in delaying a request, but delaying a request is far more costly, in terms of waiting costs, than delaying the provision of service, since we assume the need for a service is often subject to real waiting costs, while the need to supply the service is merely to augment a money supply. (Related issues are discussed in [10].) We ultimately hope to modify the mechanism so that the price of a job can be set endogenously within the system (as in real-world economies), with agents bidding for jobs rather than there being a fixed cost set externally. However, we have not yet explored the changes required to implement this change. Thus, for now, we assume that the cost is set as a function of the number of agents in the system (and that there is no possibility for agents to satisfy a request for less than the official cost or for requesters to offer to pay more than it).
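Concretely, a node that maintains an estimate of n could recompute the price with something as simple as the following sketch (in Python; the function and the r_opt parameter, read off a curve like the one in Figure 5, are ours):

```python
def price(M, n_estimate, r_opt):
    """Per-request cost c that keeps the effective average holdings,
    M / (n * c), measured in requests' worth of scrip, at the
    efficiency-maximizing ratio r_opt."""
    return M / (n_estimate * r_opt)

# With M fixed, the price falls as the population grows, so (as noted
# above) there is never an advantage in deferring to volunteer.
# e.g. price(M=12000, n_estimate=2000, r_opt=3.0) == 2.0
```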
6. SYBILS AND COLLUSION

In a naive sense, our system is essentially sybil-proof. For an agent to get d dollars, his sybils together still have to perform d units of work. Moreover, since newcomers enter the system with $0, there is no benefit to creating new agents simply to take advantage of an initial endowment. Nevertheless, there are some less direct ways that an agent could take advantage of sybils. First, by having more identities he will have a greater probability of getting chosen to make a request. It is easy to see that this will lead to the agent having higher total utility. However, this is just an artifact of our model. To make our system simple to analyze, we have assumed that request opportunities come uniformly at random. In practice, requests are made to satisfy a desire. Our model implicitly assumes that all agents are equally likely to have a desire at any particular time. Having sybils should not increase the need to have a request satisfied. Indeed, it would be reasonable to assume that sybils do not make requests at all. Second, having sybils makes it more likely that one of the sybils will be chosen to fulfill a request. This can allow a user to increase his utility by setting a lower threshold; that is, by using a strategy $S_{k'}$ where k' is smaller than the k used by the Nash equilibrium strategy. Intuitively, the need for money is not as critical if money is easier to obtain. Unlike the first concern, this seems like a real issue. It seems reasonable to believe that when people make a decision between a number of nodes to satisfy a request, they do so at random, at least to some extent. Even if they look for advertised node features to help make this decision, sybils would allow a user to advertise a wide range of features. Third, an agent can drive down the cost of fulfilling a request by introducing many sybils. Similarly, he could increase the cost (and thus the value of his money) by making a number of sybils leave the system. Conceivably he could alternate between these techniques to magnify the effects of the work he does. We have not yet calculated the exact effect of this change (it interacts with the other two effects of having sybils that we have already noted). Given the number of sybils that would be needed to cause a real change in the perceived size of a large P2P network, the practicality of this attack depends heavily on how much sybils cost an attacker and what resources he has available. The second point raised regarding sybils also applies to collusion if we allow money to be loaned. If k agents collude, they can agree that, if one runs out of money, another in the group will loan him money. By pooling their money in this way, the k agents can again do better by setting a lower threshold. Note that the loan mechanism doesn't need to be built into the system; the agents can simply use a fake transaction to transfer the money. These appear to be the main avenues for collusive attacks, but we are still exploring this issue.

7. CONCLUSION

We have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy.
Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides formal mechanisms for solving some important problems in implementing scrip systems. It tells us that with a fixed population of rational users, such systems are very unlikely to become unstable. Thus, if this stability is common belief among the agents, we would not expect inflation, bubbles, or crashes because of agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate. Our analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or, equivalently, adjust prices to achieve the same goal). There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria: are there usually two? In addition, we expect that one should be able to compute analytic estimates for the best-response function and optimal pricing, which would allow us to understand the relationship between pricing and various parameters in the model. It would also be of great interest to extend our analysis to handle more realistic settings. We mention a few possible extensions here:

• We have assumed that the world is homogeneous in a number of ways, including request frequency, utility, and ability to satisfy requests. It would be interesting to examine how relaxing any of these assumptions would alter our results.

• We have assumed that there is no cost to an agent to be a member of the system. Suppose instead that we imposed a small cost simply for being present in the system, to reflect the costs of routing messages and overlay maintenance. This modification could have a significant impact on sybil attacks.

• We have described a scrip system that works when there are no altruists and have shown that no system can work once there are sufficiently many altruists. What happens between these extremes?

• One type of irrational behavior encountered with scrip systems is hoarding. There are some similarities between hoarding and altruistic behavior. While an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money). It would be interesting to investigate the extent to which our system is robust against hoarders. Clearly with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.

• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests. It would be interesting to investigate the effect of such communities on the equilibrium of our system.

There are also a number of implementation issues that would have to be resolved in a real system. For example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided. Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. For example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture.
Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Like maintaining a presence in the network, this imposes a cost on the node, but unlike that responsibility, it can be easily shirked. Karma suggests several ways to incentivize nodes to perform these duties. We have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.

8. ACKNOWLEDGEMENTS

We would like to thank Emin Gun Sirer, Shane Henderson, Jon Kleinberg, and three anonymous referees for helpful suggestions. EF, IK, and JH are supported in part by NSF under grant ITR-0325453. JH is also supported in part by NSF under grants CTC-0208535 and IIS-0534064, by ONR under grant N00014-01-10-511, by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grants N00014-01-1-0795 and N00014-04-1-0725, and by AFOSR under grant F49620-021-0101.

9. REFERENCES

[1] E. Adar and B. A. Huberman. Free riding on Gnutella. First Monday, 5(10), 2000.
[2] K. G. Anagnostakis and M. Greenwald. Exchange-based incentive mechanisms for peer-to-peer file sharing. In International Conference on Distributed Computing Systems (ICDCS), pages 524-533, 2004.
[3] BitTorrent Inc. BitTorrent web site. http://www.bittorent.com.
[4] A. Cheng and E. Friedman. Sybilproof reputation mechanisms. In Workshop on Economics of Peer-to-Peer Systems (P2PECON), pages 128-132, 2005.
[5] Cornell Information Technologies. Cornell's commodity internet usage statistics. http://www.cit.cornell.edu/computer/students/bandwidth/charts.html.
[6] J. R. Douceur. The sybil attack. In International Workshop on Peer-to-Peer Systems (IPTPS), pages 251-260, 2002.
[7] G. Ellison. Cooperation in the prisoner's dilemma with anonymous random matching. Review of Economic Studies, 61:567-588, 1994.
[8] eMule Project. eMule web site. http://www.emule-project.net/.
[9] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust incentive techniques for peer-to-peer networks. In ACM Conference on Electronic Commerce (EC), pages 102-111, 2004.
[10] E. J. Friedman and D. C. Parkes. Pricing WiFi at Starbucks: issues in online mechanism design. In EC '03: Proceedings of the 4th ACM Conference on Electronic Commerce, pages 240-241. ACM Press, 2003.
[11] E. J. Friedman and P. Resnick. The social cost of cheap pseudonyms. Journal of Economics and Management Strategy, 10(2):173-199, 2001.
[12] R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of trust and distrust. In Conference on the World Wide Web (WWW), pages 403-412, 2004.
[13] M. Gupta, P. Judge, and M. H. Ammar. A reputation system for peer-to-peer networks. In Network and Operating System Support for Digital Audio and Video (NOSSDAV), pages 144-152, 2003.
[14] Z. Gyongi, P. Berkhin, H. Garcia-Molina, and J. Pedersen. Link spam detection based on mass estimation. Technical report, Stanford University, 2005.
[15] J. Ioannidis, S. Ioannidis, A. D. Keromytis, and V. Prevelakis. Fileteller: Paying and getting paid for file storage. In Financial Cryptography, pages 282-299, 2002.
[16] E. T. Jaynes. Where do we stand on maximum entropy? In R. D. Levine and M. Tribus, editors, The Maximum Entropy Formalism, pages 15-118. MIT Press, Cambridge, Mass., 1978.
[17] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The Eigentrust algorithm for reputation management in P2P networks. In Conference on the World Wide Web (WWW), pages 640-651, 2003.
[18] M. Kandori. Social norms and community enforcement.
Review of Economic Studies, 59:63-80, 1992.
[19] LogiSense Corporation. LogiSense web site. http://www.logisense.com/tm p2p.html.
[20] L. Lovasz and P. Winkler. Mixing of random walks and other diffusions on a graph. In Surveys in Combinatorics, 1993, Walker (Ed.), London Mathematical Society Lecture Note Series 187, Cambridge University Press, 1995.
[21] Open Source Technology Group. Slashdot FAQ: comments and moderation. http://slashdot.org/faq/com-mod.shtml#cm700.
[22] OSMB LLC. Gnutella web site. http://www.gnutella.com/.
[23] M. L. Puterman. Markov Decision Processes. Wiley, 1994.
[24] SETI@home. SETI@home web page. http://setiathome.ssl.berkeley.edu/.
[25] Sharman Networks Ltd. Kazaa web site. http://www.kazaa.com/.
[26] V. Vishnumurthy, S. Chandrakumar, and E. Sirer. Karma: A secure economic framework for peer-to-peer resource sharing. In Workshop on Economics of Peer-to-Peer Systems (P2PECON), 2003.
[27] L. Xiong and L. Liu. Building trust in decentralized peer-to-peer electronic communities. In International Conference on Electronic Commerce Research (ICECR), 2002.
[28] H. Zhang, A. Goel, R. Govindan, K. Mason, and B. V. Roy. Making eigenvector-based reputation systems robust to collusion. In Workshop on Algorithms and Models for the Web-Graph (WAW), pages 92-104, 2004.
Perhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work. Since the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones. An example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule [8]. Each user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past. However, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful. Anagnostakis and Greenwald [2] present a more sophisticated version of this approach, but it still seems to suffer from similar problems. A number of attempts have been made at providing general reputation systems (e.g. [12, 13, 17, 27]). The basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation. However, these attempts tend to suffer from practical problems because they implicitly view users as either "good" or "bad", assume that the "good" users will act according to the specified protocol, and that there are relatively few "bad" users. Unfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it. We cannot count on only a few users being "bad" (in the sense of not following the prescribed protocol). For example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users. However, to avoid penalizing new users, they gave new users an average rating. Users discovered that they could use this relatively good rating to free ride for a while and, once it started to get bad, they could delete their stored information and effectively come back as a "new" user, thus circumventing the system (see [2] for a discussion and [11] for a formal analysis of this "whitewashing"). Thus Kazaa's reputation system is ineffective. This is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would. Recent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear. The analyses of the practical vulnerabilities and the existence of such systems that are immune to such attacks remains an area of active research (e.g., [4, 28, 14]). Simple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement and are quite popular (see, e.g., [13, 15, 26]). However, they have a different set of problems. Perhaps the most common involve determining the amount of money in the system. Roughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make request. On the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already. A related problem involves handling newcomers. 
If newcomers are each given a positive amount of money, then the system is open to sybil attacks. Perhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26]. In this paper, we provide a formal model in which to analyze scrip systems. We describe a simple scrip system and show that, under reasonable assumptions, for each fixed amount of money there is a nontrivial Nash equilibrium involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k.' An interesting aspect of our analysis is that, in equilibrium, the distribution of users with each amount of money is the distribution that maximizes entropy (subject to the money supply constraint). This allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents. It also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency. While assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable. In systems where the goal is to do work, new users can acquire money by performing work. It should also work in Kazaalike system where a user can come in with some resources (e.g., a private collection of MP3s). The rest of the paper is organized as follows. In Section 2, we present our formal model and observe that it can be used to understand the effect of altruists. In Section 3, we examine what happens in the game under nonstrategic play, if all agents use the same threshold strategy. We show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy. Using this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy. Moreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users. In Section 6, we discuss the extent to which our approach can handle sybils and collusion. We conclude in Section 7. 2. THE MODEL To begin, we formalize providing service in a P2P network as a non-cooperative game. Unlike much of the modeling in this area, our model will model the asymmetric interactions in a file sharing system in which the matching of players (those requesting a file with those who have that particular file) is a key part of the system. This is in contrast with much previous work which uses random matching in a prisoner's dilemma. Such models were studied in the economics literature [18, 7] and first applied to online reputations in [11]; an application to P2P is found in [9]. This random-matching model fails to capture some salient aspects of a number of important settings. When a request is made, there are typically many people in the network who can potentially satisfy it (especially in a large P2P network), but not all can. For example, some people may not have the time or resources to satisfy the request. The randommatching process ignores the fact that some people may not be able to satisfy the request. 
Presumably, if the person matched with the requester could not satisfy the match, he would have to defect. Moreover, it does not capture the fact that the decision as to whether to "volunteer" to satisfy the request should be made before the matching process, not after. That is, the matching process does not capture ` Although we refer to our unit of scrip as the dollar, these are not real dollars nor do we view them as convertible to dollars. the fact that if someone is unwilling to satisfy the request, there will doubtless be others who can satisfy it. Finally, the actions and payoffs in the prisoner's dilemma game do not obviously correspond to actual choices that can be made. For example, it is not clear what defection on the part of the requester means. In our model we try to deal with all these issues. Suppose that there are n agents. At each round, an agent is picked uniformly at random to make a request. Each other agent is able to satisfy this request with probability,3> 0 at all times, independent of previous behavior. The term,3 is intended to capture the probability that an agent is busy, or does not have the resources to fulfill the request. Assuming that,3 is time-independent does not capture the intution that being an unable to fulfill a request at time t may well be correlated with being unable to fulfill it at time t + 1. We believe that, in large systems, we should be able to drop the independence assumption, but we leave this for future work. In any case, those agents that are able to satisfy the request must choose whether or not to volunteer to satisfy it. If at least one agent volunteers, the requester gets a benefit of 1 util (the job is done) and one of volunteers is chosen at random to fulfill the request. The agent that fulfills the request pays a cost of α <1. As is standard in the literature, we assume that agents discount future payoffs by a factor of S per time unit. This captures the intuition that a util now is worth more than a util tomorrow, and allows us to compute the total utility derived by an agent in an infinite game. Lastly, we assume that with more players requests come more often. Thus we assume that the time between rounds is 1/n. This captures the fact that the systems we want to model are really processing many requests in parallel, so we would expect the number of concurrent requests to be proportional to the number of users .2 Let G (n, S, α,,3) denote this game with n agents, a discount factor of S, a cost to satisfy requests of α, and a probability of being able to satisfy requests of,3. When the latter two parameters are not relevant, we sometimes write G (n, S). We use the following notation throughout the paper: • pt denotes the agent chosen in round t. • Bti E {0, 11 denotes whether agent i can satisfy the request in round t. Bit = 1 with probability,3> 0 and Bit is independent of Bt' i for all t' = ~ t. • Vit E {0, 11 denotes agent i's decision about whether to volunteer in round t; 1 indicates volunteering. Vit is determined by agent i's strategy. • vt E {j Vjt Btj = 11 denotes the agent chosen to satisfy the request. This agent is chosen uniformly at random from those who are willing (Vjt = 1) and able (Btj = 1) to satisfy the request. • uti denotes agent i's utility in round t. 
A standard agent is one whose utility is determined as discussed in the introduction; namely, the agent gets a utility of 1 for a fulfilled request and utility −α for fulfilling a request. Thus, if i is a standard agent, then u_i^t = 1 if i = p^t and some agent volunteers, u_i^t = −α if i = v^t, and u_i^t = 0 otherwise.
• U_i = Σ_{t=0}^∞ δ^{t/n} u_i^t denotes the total utility for agent i. It is the discounted total of agent i's utility in each round. Note that the effective discount factor is δ^{1/n}, since an increase in n leads to a shortening of the time between rounds.
Now that we have a model of making and satisfying requests, we use it to analyze free riding. Take an altruist to be someone who always fulfills requests. Agent i might rationally behave altruistically if agent i's utility function has the following form, for some α' > 0: u_i^t = 1 if i = p^t and some agent volunteers, u_i^t = α' if i = v^t, and u_i^t = 0 otherwise. Thus, rather than suffering a loss of utility when satisfying a request, an agent derives positive utility from satisfying it. Such a utility function is a reasonable representation of the pleasure that some people get from the sense that they provide the music that everyone is playing. For such altruistic agents, playing the strategy that sets V_i^t = 1 for all t is dominant. While having a nonstandard utility function might be one reason that a rational agent might use this strategy, there are certainly others. For example, a naive user of filesharing software with a good connection might well follow this strategy. All that matters for the following discussion is that there are some agents that use this strategy, for whatever reason. As we have observed, such users seem to exist in some large systems. Suppose that our system has a altruists. Intuitively, if a is moderately large, they will manage to satisfy most of the requests in the system even if other agents do no work. Thus, there is little incentive for any other agent to volunteer, because he is already getting full advantage of participating in the system. Based on this intuition, it is a relatively straightforward calculation to determine a value of a that depends only on α, β, and δ, but not the number n of players in the system, such that the dominant strategy for all standard agents i is to never volunteer to satisfy any requests (i.e., V_i^t = 0 for all t). PROOF. Consider the strategy for a standard player j in the presence of a altruists. Even with no money, player j will get a request satisfied with probability 1 − (1 − β)^a just through the actions of these altruists. Thus, even if j is chosen to make a request in every round, the most additional expected utility he can hope to gain by having money is Σ_{k=1}^∞ (1 − β)^a δ^k ≤ (1 − β)^a/(1 − δ). If (1 − β)^a/(1 − δ) < α or, equivalently, if a > log_{1−β}(α(1 − δ)), never volunteering is a dominant strategy. Consider the following reasonable values for our parameters: β = .01 (so that each player can satisfy 1% of the requests), α = .1 (a low but non-negligible cost), δ = .9999/day (which corresponds to a yearly discount factor of approximately 0.95), and an average of 1 request per day per player. Then we only need a > 1145. While this is a large number, it is small relative to the size of a large P2P network. Current systems all have a pool of users behaving like our altruists.
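As a quick check on this calculation, the following Python snippet (a minimal sketch of our own; the variable names are ours, not the paper's) evaluates the bound a > log_{1−β}(α(1 − δ)) at the parameter values just given.

import math

# Parameter values from the example above.
beta = 0.01     # probability a given agent can satisfy a request
alpha = 0.1     # cost of fulfilling a request
delta = 0.9999  # daily discount factor

# Never volunteering is dominant once a > log_{1-beta}(alpha * (1 - delta)).
a_bound = math.log(alpha * (1 - delta)) / math.log(1 - beta)
print(a_bound)  # roughly 1145.5, matching the a > 1145 figure in the text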
This means that attempts to add a reputation system on top of an existing P2P system to influence users to cooperate will have no effect on rational users. To have a fair distribution of work, these systems must be fundamentally redesigned to eliminate the pool of altruistic users. In some sense, this is not a problem at all. In a system with altruists, the altruists are presumably happy, as are the standard agents, who get almost all their requests satisfied without having to do any work. Indeed, current P2P networks work quite well in terms of distributing content to people. However, as we said in the introduction, there is some reason to believe these altruists may not be around forever. Thus, it is worth looking at what can be done to make these systems work in their absence. For the rest of this paper we assume that all agents are standard, and try to maximize expected utility. We are interested in equilibria based on a scrip system. Each time an agent has a request satisfied he must pay the person who satisfied it some amount. For now, we assume that the payment is fixed; for simplicity, we take the amount to be $1. We denote by M the total amount of money in the system. We assume that M > 0 (otherwise no one will ever be able to get paid). In principle, agents are free to adopt a very wide variety of strategies. They can make decisions based on the names of other agents or use a strategy that is heavily history-dependent, and mix these strategies freely. To aid our analysis, we would like to be able to restrict our attention to a simpler class of strategies. The class of strategies we are interested in is easy to motivate. The intuitive reason for wanting to earn money is to cater for the possibility that an agent will run out before he has a chance to earn more. On the other hand, a rational agent with plenty of money would not want to work, because by the time he has managed to spend all his money, the util will have less value than the present cost of working. The natural balance between these two is a threshold strategy. Let S_k be the strategy where an agent volunteers whenever he has less than k dollars and not otherwise. Note that S_0 is the strategy where the agent never volunteers. While everyone playing S_0 is a Nash equilibrium (nobody can do better by volunteering if no one else is willing to), it is an uninteresting one. As we will show in Section 4, it is sufficient to restrict our attention to this class of strategies. We use K_i^t to denote the amount of money agent i has at time t. Clearly K_i^{t+1} = K_i^t unless agent i has a request satisfied, in which case K_i^{t+1} = K_i^t − 1, or agent i fulfills a request, in which case K_i^{t+1} = K_i^t + 1. 3. THE GAME UNDER NONSTRATEGIC PLAY Before we consider strategic play, we examine what happens in the system if everyone just plays the same strategy S_k. Our overall goal is to show that there is some distribution over money (i.e., the fraction of people with each amount of money) such that the system "converges" to this distribution in a sense to be made precise shortly. Suppose that everyone plays S_k. For simplicity, assume that everyone has at most k dollars. We can make this assumption with essentially no loss of generality, since if someone has more than k dollars, he will just spend money until he has at most k dollars. After this point he will never acquire more than k. Thus, eventually the system will be in such a state. If M ≥ kn, no agent will ever be willing to work. Thus, for the purposes of this section we assume that M < kn.
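To make the round structure and the money dynamics concrete, here is a minimal Python sketch of a single round when every agent plays S_k. This is our own illustration, not code from the paper, and the function and variable names are ours.

import random

def play_round(money, k, beta, alpha):
    # Simulate one round of the scrip game with all agents playing S_k.
    # money[i] is agent i's current dollars; the list is updated in place.
    # Returns the round utilities of the requester and the chosen server.
    n = len(money)
    requester = random.randrange(n)          # p^t: chosen uniformly at random
    if money[requester] == 0:
        return 0.0, 0.0                      # a broke agent cannot pay
    # Willing (S_k: fewer than k dollars) and able (with probability beta).
    volunteers = [i for i in range(n)
                  if i != requester and money[i] < k and random.random() < beta]
    if not volunteers:
        return 0.0, 0.0                      # the request goes unsatisfied
    server = random.choice(volunteers)       # v^t: uniform among willing & able
    money[requester] -= 1                    # the requester pays $1
    money[server] += 1                       # the server earns $1
    return 1.0, -alpha                       # benefit of 1 util, cost of alpha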
From the perspective of a single agent, in (stochastic) equilibrium, the agent is undergoing a random walk. However, the parameters of this random walk depend on the random walks of the other agents, and it is quite complicated to solve directly. Thus we consider an alternative analysis based on the evolution of the system as a whole. If everyone has at most k dollars, then the amount of money that an agent has is an element of {0, ..., k}. If there are n agents, then the state of the game can be described by identifying how much money each agent has, so we can represent it by an element of S_{k,n} = {0, ..., k}^{{1,...,n}} (a function assigning an amount of money to each agent). Since the total amount of money is constant, not all of these states can arise in the game. For example, the state where each player has $0 is impossible to reach in any game with money in the system. Let m(s) = Σ_{i∈{1,...,n}} s(i) denote the total amount of money in the game at state s, where s(i) is the number of dollars that agent i has in state s. We want to consider only those states where the total money in the system is M, namely S_{k,n,M} = {s ∈ S_{k,n} | m(s) = M}. Under the assumption that all agents use strategy S_k, the evolution of the system can be treated as a Markov chain M_{k,n,M} over the state space S_{k,n,M}. It is possible to move from one state to another in a single round if, by choosing a particular agent to make a request and a particular agent to satisfy it, the amounts of money possessed by each agent become those in the second state. Therefore the probability of a transition from a state s to t is 0 unless there exist two agents i and j such that s(i') = t(i') for all i' ∉ {i, j}, t(i) = s(i) + 1, and t(j) = s(j) − 1. In this case the probability of transitioning from s to t is the probability of j being chosen to spend a dollar and having someone willing and able to satisfy his request, (1/n)(1 − (1 − β)^{|{i' | s(i') ≠ k}| − I_j}), multiplied by the probability of i being chosen to satisfy his request, 1/(|{i' | s(i') ≠ k}| − I_j). Here I_j is 0 if j has k dollars and 1 otherwise (it is just a correction for the fact that j cannot satisfy his own request). Let Δ^k denote the set of probability distributions on {0, ..., k}. We can think of an element of Δ^k as describing the fraction of people with each amount of money. This is a useful way of looking at the system, since we typically don't care who has each amount of money, but just the fraction of people that have each amount. As before, not all elements of Δ^k are possible, given our constraint that the total amount of money is M. Rather than thinking in terms of the total amount of money in the system, it will prove more useful to think in terms of the average amount of money each player has. Of course, the total amount of money in a system with n agents is M iff the average amount that each player has is m = M/n. Let Δ^k_m denote all distributions d ∈ Δ^k such that E(d) = m (i.e., Σ_{j=0}^k d(j)·j = m). Given a state s ∈ S_{k,n,M}, let d_s ∈ Δ^k_{M/n} denote the distribution of money in s (so d_s(j) is the fraction of agents with j dollars in s), and let H(d) denote the entropy of d. If Δ is a closed convex set of distributions, then it is well known that there is a unique distribution in Δ at which the entropy function takes its maximum value in Δ. Since Δ^k_m is easily seen to be a closed convex set of distributions, it follows that there is a unique distribution in Δ^k_m, which we denote d*_{k,m}, whose entropy is greater than that of all other distributions in Δ^k_m. We now show that, for n sufficiently large, the Markov chain M_{k,n,M} is almost surely in a state s such that d_s is close to d*_{k,M/n}.
The statement is correct under a number of senses of "close"; for definiteness, we consider the Euclidean distance. Given ε > 0, let S_{k,n,m,ε} denote the set of states s in S_{k,n,mn} such that Σ_{j=0}^k |d_s(j) − d*_{k,m}(j)|² < ε. Given a Markov chain M over a state space S and S' ⊆ S, let X_{t,s,S'} be the event that M is in a state of S' at time t, when started in state s. THEOREM 3.1. Given ε > 0, k, and m, for all sufficiently large n, there is a time t* such that, for every state s ∈ S_{k,n,mn} and all t > t*, Pr(X_{t,s,S_{k,n,m,ε}}) > 1 − ε. PROOF. (Sketch) Suppose that at some time t, Pr(X_{t,s,s'}) is uniform for all s'. Then the probability of being in a set of states is just the size of the set divided by the total number of states. A standard technique from statistical mechanics is to show that there is a concentration phenomenon around the maximum entropy distribution [16]. More precisely, using a straightforward combinatorial argument, it can be shown that the fraction of states not in S_{k,n,m,ε} is bounded by p(n)/e^{cn}, where p is a polynomial. This fraction clearly goes to 0 as n gets large. Thus, for sufficiently large n, Pr(X_{t,s,S_{k,n,m,ε}}) > 1 − ε if Pr(X_{t,s,s'}) is uniform. It is relatively straightforward to show that our Markov chain has a limit distribution π over S_{k,n,mn} such that for all s, s' ∈ S_{k,n,mn}, lim_{t→∞} Pr(X_{t,s,s'}) = π_{s'}. Let P_{ij} denote the probability of transitioning from state i to state j. It is easily verified by an explicit computation of the transition probabilities that P_{ij} = P_{ji} for all states i and j. It immediately follows from this symmetry that π_s = π_{s'}, so π is uniform. After a sufficient amount of time, the distribution will be close enough to π that the probabilities are again bounded by a constant, which is sufficient to complete the theorem.
Figure 1: Average time (in steps) to come within a given distance of the maximum-entropy distribution.
Figure 2: Maximum distance from the maximum-entropy distribution over 10^6 timesteps.
Figure 3: Average time to get within .001 of the maximum-entropy distribution.
We performed a number of experiments that show that the maximum-entropy behavior described in Theorem 3.1 arises quickly for quite practical values of n and t. The first experiment showed that, even if n = 1000, we reach the maximum-entropy distribution quickly. We averaged 10 runs of the Markov chain for k = 5, where there is enough money for each agent to have $2, starting from a very extreme distribution (every agent has either $0 or $5), and considered the average time needed to come within various distances of the maximum-entropy distribution. As Figure 1 shows, after 2,000 steps, on average, the Euclidean distance from the average distribution of money to the maximum-entropy distribution is .008; after 3,000 steps, the distance is down to .001. Note that this is really only 3 real time units, since with 1000 players we have 1000 transactions per time unit. We then considered how close the distribution stays to the maximum-entropy distribution once it has reached it. To simplify things, we started the system in a state whose distribution was very close to the maximum-entropy distribution and ran it for 10^6 steps, for various values of n. As Figure 2 shows, the system does not move far from the maximum-entropy distribution once it is there. For example, if n = 5000, the system is never more than distance .001 from the maximum-entropy distribution; if n = 25,000, it is never more than .0002 from the maximum-entropy distribution. Finally, we considered more carefully how quickly the system converges to the maximum-entropy distribution for various values of n. There are approximately k^n possible states, so the convergence time could in principle be quite large. (A sketch for computing d*_{k,m} and these distance measurements appears below.)
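Maximizing entropy subject to a fixed mean yields a distribution of the exponential form d(j) ∝ λ^j, so d*_{k,m} can be computed numerically by solving for λ. The sketch below is our own illustration (the names and numerical choices are ours, not the paper's); it finds λ by bisection, using the fact that the mean of this family is increasing in λ, and measures the Euclidean distance of an empirical money distribution from d*. Combined with play_round above, it can replay the k = 5, m = 2 experiment on a small scale.

def max_entropy_dist(k, m, iters=200):
    # Maximum-entropy distribution on {0,...,k} with mean m (0 < m < k).
    # The maximizer has the exponential form d(j) = lam**j / Z; we find
    # lam by bisection, since the mean is strictly increasing in lam.
    # (Direct powers of lam are fine for moderate k; rescale for large k.)
    def mean(lam):
        w = [lam ** j for j in range(k + 1)]
        return sum(j * wj for j, wj in enumerate(w)) / sum(w)
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < m:
            lo = mid
        else:
            hi = mid
    w = [lo ** j for j in range(k + 1)]
    z = sum(w)
    return [wj / z for wj in w]

def distance_to_max_entropy(money, k, m):
    # Euclidean distance between the empirical distribution of money
    # and the maximum-entropy distribution d*_{k,m}.
    n = len(money)
    d_star = max_entropy_dist(k, m)
    d = [sum(1 for x in money if x == j) / n for j in range(k + 1)]
    return sum((dj - sj) ** 2 for dj, sj in zip(d, d_star)) ** 0.5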
However, we suspect that the Markov chain that arises here is rapidly mixing, which means that it will converge significantly faster (see [20] for more details about rapid mixing). We believe that the actual time needed is O(n). This behavior is illustrated in Figure 3, which shows that for our example chain (again averaged over 10 runs), after 3n steps, the Euclidean distance between the actual distribution of money in the system and the maximum-entropy distribution is less than .001. 4. THE GAME UNDER STRATEGIC PLAY We have seen that the system is well behaved if the agents all follow a threshold strategy; we now want to show that there is a nontrivial Nash equilibrium where they do so (that is, a Nash equilibrium where all the agents use S_k for some k > 0). This is not true in general. If δ is small, then agents have no incentive to work. Intuitively, if future utility is sufficiently discounted, then all that matters is the present, and there is no point in volunteering to work. With small δ, S_0 is the only equilibrium. However, we show that for δ sufficiently large, there is another equilibrium in threshold strategies. We do this by first showing that, if every other agent is playing a threshold strategy, then there is a best response that is also a threshold strategy (although not necessarily the same one). We then show that there must be some (mixed) threshold strategy for which this best response is the same strategy. It follows that this tuple of threshold strategies is a Nash equilibrium. As a first step, we show that, for all k, if everyone other than agent i is playing S_k, then there is a threshold strategy S_{k'} that is a best response for agent i. To prove this, we need to assume that the system is close to the steady-state distribution (i.e., the maximum-entropy distribution). However, as long as δ is sufficiently close to 1, we can ignore what happens during the period that the system is not in steady state. (Formally, we need to define the strategies when the system is far from equilibrium; however, these far-from-equilibrium strategies will not affect the equilibrium behavior when n is large, since deviations from stochastic equilibrium are extremely rare.) We have thus far considered threshold strategies of the form S_k, where k is a natural number; this is a discrete set of strategies. For a later proof, it will be helpful to have a continuous set of strategies. If γ = k + γ', where k is a natural number and 0 ≤ γ' < 1, let S_γ be the strategy that performs S_k with probability 1 − γ' and S_{k+1} with probability γ'. (Note that we are not considering arbitrary mixed threshold strategies here, but rather just mixing between adjacent strategies for the sole purpose of making our strategies continuous in a natural way.) Theorem 3.1 applies to strategies S_γ (the same proof goes through without change), where γ is an arbitrary nonnegative real number. THEOREM 4.1. For all γ, if δ is sufficiently close to 1, n is sufficiently large, and every agent other than i plays S_γ, then there is a best response for agent i that is a threshold strategy S_{γ'}; moreover, either this best response is unique, or there is an interval of best responses. PROOF. (Sketch) If δ is sufficiently large, we can ignore what happens before the system converges to the maximum-entropy distribution. If n is sufficiently large, then the strategy played by one agent will not affect the distribution of money significantly. Thus, the probability of i moving from one state (dollar amount) to another depends only on i's strategy (since we can take the probability that i will be chosen to make a request and the probability that i will be chosen to satisfy a request to be constant). Thus, from i's point of view, the system is a Markov decision process (MDP), and i needs to compute the optimal policy (strategy) for this MDP. It follows from standard results [23, Theorem 6.11.6] that there is an optimal policy that is a threshold policy. The argument that the best response is either unique or there is an interval of best responses follows from a more careful analysis of the value function for the MDP. We remark that there may be best responses that are not threshold strategies. All that Theorem 4.1 shows is that, among best responses, there is at least one that is a threshold strategy. Since we know that there is a best response that is a threshold strategy, we can look for a Nash equilibrium in the space of threshold strategies. THEOREM 4.2. For all M and n, there exists a δ* < 1 such that for all δ > δ*, there is a nontrivial Nash equilibrium in which all agents play S_γ for some γ > 0. PROOF. It follows easily from the proof of Theorem 4.1 that if br(δ, γ) is the minimal best-response threshold strategy when all the other agents are playing S_γ and the discount factor is δ, then, for fixed δ, br(δ, ·) is a step function. It also follows from the theorem that if there are two best responses, then a mixture of them is also a best response. Therefore, if we can join the "steps" by a vertical line, we get a best-response curve. It is easy to see that everywhere this best-response curve crosses the diagonal y = x defines a Nash equilibrium where all agents are using the same threshold strategy. As we have already observed, one such equilibrium occurs at 0. If there are only $M in the system, we can restrict to threshold strategies S_k where k < M + 1. Since no one can have more than $M, all strategies S_k for k > M are equivalent to S_M; these are just the strategies where the agent always volunteers in response to a request made by someone who can pay. Clearly br(δ, S_M) < M for all δ, so the best-response curve is below the diagonal at M. If k < M/n, every player will have at least k dollars, and so will be unwilling to work, and the best response is just 0. Consider k*, the smallest k such that k > M/n. It is not hard to show that for k* there exists a δ* such that for all δ > δ*, br(δ, k*) > k*. It follows by continuity that if δ > δ*, there must be some γ such that br(δ, γ) = γ. This is the desired Nash equilibrium. This argument also shows us that we cannot in general expect fixed points to be unique. If br(δ, k*) = k* and br(δ, k* + 1) > k* + 1, then our argument shows there must be a second fixed point. In general there may be multiple fixed points even when br(δ, k*) > k*, as illustrated in Figure 4 with n = 1000 and M = 3000. Figure 4: The best-response function for n = 1000 and M = 3000. Theorem 4.2 allows us to restrict our design to agents using threshold strategies with the confidence that there will be a nontrivial equilibrium. However, it does not rule out the possibility that there may be other equilibria that do not involve threshold strategies. It is even possible (although it seems unlikely) that some of these equilibria might be better.
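To illustrate the MDP view in the proof of Theorem 4.1, here is a minimal value-iteration sketch. It is our own construction, not code from the paper: the per-round probabilities p_req (the agent makes a paid request, which for simplicity we assume is then satisfied) and p_opp (a volunteering agent is chosen to earn a dollar) are treated as fixed constants with p_req + p_opp ≤ 1, which is the large-n approximation used above; in a faithful computation of br(δ, γ) they would be derived from the maximum-entropy distribution induced by S_γ.

def best_response_threshold(p_req, p_opp, alpha, delta,
                            max_money=50, sweeps=200000, tol=1e-10):
    # State = the agent's current dollars. Each round: with probability
    # p_req the agent makes a request (utility 1, pays $1, only if it has
    # money); if it volunteers, with probability p_opp it is chosen to
    # fulfill a request (utility -alpha, earns $1); otherwise nothing.
    V = [0.0] * (max_money + 1)
    policy = [0] * (max_money + 1)
    for _ in range(sweeps):
        newV = [0.0] * (max_money + 1)
        for x in range(max_money + 1):
            up = min(x + 1, max_money)
            if x > 0:
                req = p_req * (1.0 + delta * V[x - 1])
            else:
                req = p_req * delta * V[0]   # broke: the request goes unpaid
            stay = req + (1 - p_req) * delta * V[x]
            work = (req + p_opp * (-alpha + delta * V[up])
                    + (1 - p_req - p_opp) * delta * V[x])
            newV[x] = max(stay, work)
            policy[x] = 1 if work > stay else 0
        done = max(abs(a - b) for a, b in zip(V, newV)) < tol
        V = newV
        if done:
            break
    # By the theorem, the optimal policy has threshold form: volunteer
    # exactly when holding fewer than some number of dollars.
    return sum(policy)

Sweeping the other agents' strategy S_γ (which changes p_req and p_opp through the induced steady-state distribution) would trace out the step function br(δ, ·) whose fixed points are shown in Figure 4.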
5. SOCIAL WELFARE AND SCALABILITY Our theorems show that for each value of M and n, for sufficiently large δ, there is a nontrivial Nash equilibrium where all the agents use some threshold strategy S_γ(M, n). From the point of view of the system designer, not all equilibria are equally good; we want an equilibrium where as few as possible agents have $0 when they get a chance to make a request (so that they can pay for the request) and relatively few agents have more than the threshold amount of money (so that there are always plenty of agents to fulfill the request). There is a tension between these objectives. It is not hard to show that as the fraction of agents with $0 increases in the maximum-entropy distribution, the fraction of agents with the maximum amount of money decreases. Thus, our goal is to understand what the optimal amount of money should be in the system, given the number of agents. That is, we want to know the amount of money M that maximizes efficiency, i.e., the total expected utility if all the agents use S_γ(M, n). (If there are multiple equilibria, we take S_γ(M, n) to be the Nash equilibrium that has the highest efficiency for fixed M and n.) We first observe that the most efficient equilibrium depends only on the ratio of M to n, not on the actual values of M and n. THEOREM 5.1. For all sufficiently large n and n', if M/n = M'/n', then the equilibrium strategies (and hence the most efficient equilibrium) with n agents and M dollars are the same as with n' agents and M' dollars. PROOF. Fix M/n = r. Theorem 3.1 shows that the maximum-entropy distribution depends only on k and the ratio M/n, not on M and n separately. Thus, given r, for each choice of k, there is a unique maximum-entropy distribution d_{k,r}. The best response br(δ, k) depends only on the distribution d_{k,r}, not on M or n. Thus, the Nash equilibrium depends only on the ratio r. That is, for all choices of M and n such that n is sufficiently large (so that Theorem 3.1 applies) and M/n = r, the equilibrium strategies are the same. In light of Theorem 5.1, the system designer should ensure that there is enough money M in the system so that the ratio M/n is optimal. We are currently exploring exactly what the optimal ratio is. As our very preliminary results for β = 1 show in Figure 5, the ratio appears to be monotone increasing in δ, which matches the intuition that we should provide more patient agents with the opportunity to save more money. Additionally, it appears to be relatively smooth, which suggests that it may have a nice analytic solution. Figure 5: Optimal average amount of money, to the nearest .25, for β = 1. We remark that, in practice, it may be easier for the designer to vary the price of fulfilling a request rather than injecting money into the system. This produces the same effect. For example, changing the cost of fulfilling a request from $1 to $2 is equivalent to halving the amount of money that each agent has. Similarly, halving the cost of fulfilling a request is equivalent to doubling the amount of money that everyone has. With a fixed amount of money M, there is an optimal product nc of the number of agents and the cost c of fulfilling a request. Theorem 5.1 also tells us how to deal with a dynamic pool of agents. Our system can handle newcomers relatively easily: simply allow them to join with no money. This gives existing agents no incentive to leave and rejoin as newcomers. We then change the price of fulfilling a request so that the optimal ratio is maintained. This method has the nice feature that it can be implemented in a distributed fashion: if all nodes in the system have a good estimate of n, then they can all adjust prices automatically, as in the sketch below. (Alternatively, the number of agents in the system can be posted in a public place.)
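For instance, each node could recompute the posted price as follows. This is a minimal sketch of our own (the names total_money, n_estimate, and r_opt are ours; r_opt is the optimal money-per-agent ratio, as in Figure 5, and n_estimate is whatever estimate of n a node has).

def posted_price(total_money, n_estimate, r_opt):
    # With price c per request, an agent holding total_money/n dollars
    # effectively holds (total_money/n)/c price-units, so keeping that
    # at the optimal ratio r_opt gives c = total_money / (n * r_opt).
    return total_money / (n_estimate * r_opt)

# Example: $3000 among 1000 agents with target ratio 3 gives a price of
# $1; if 500 newcomers join with no money, the price falls to about
# $0.67, restoring the optimal ratio without injecting new money.
print(posted_price(3000, 1000, 3))             # 1.0
print(round(posted_price(3000, 1500, 3), 2))   # 0.67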
Approaches that rely on adjusting the amount of money may require expensive system-wide computations (see [26] for an example), and must be carefully tuned to avoid creating incentives for agents to manipulate the system by which this is done. Note that, in principle, the realization that the cost of fulfilling a request can change can affect an agent's strategy. For example, if an agent expects the cost to increase, then he may want to defer volunteering to fulfill a request. However, if the number of agents in the system is always increasing, then the cost always decreases, so there is never any advantage in waiting. There may be an advantage in delaying a request, but delaying a request is far more costly, in terms of waiting costs, than delaying service, since we assume the need for a service is often subject to real waiting costs, while the need to supply the service is merely to augment a money supply. (Related issues are discussed in [10].) We ultimately hope to modify the mechanism so that the price of a job can be set endogenously within the system (as in real-world economies), with agents bidding for jobs rather than there being a fixed cost set externally. However, we have not yet explored the changes required to implement this change. Thus, for now, we assume that the cost is set as a function of the number of agents in the system (and that there is no possibility for agents to satisfy a request for less than the "official" cost or for requesters to offer to pay more than it). 6. SYBILS AND COLLUSION In a naive sense, our system is essentially sybil-proof: for an agent to get d dollars, his sybils together still have to perform d units of work. Moreover, since newcomers enter the system with $0, there is no benefit to creating new agents simply to take advantage of an initial endowment. Nevertheless, there are some less direct ways that an agent could take advantage of sybils. First, by having more identities he will have a greater probability of getting chosen to make a request. It is easy to see that this will lead to the agent having higher total utility. However, this is just an artifact of our model. To make our system simple to analyze, we have assumed that request opportunities come uniformly at random. In practice, requests are made to satisfy a desire. Our model implicitly assumed that all agents are equally likely to have a desire at any particular time. Having sybils should not increase the need to have a request satisfied. Indeed, it would be reasonable to assume that sybils do not make requests at all. Second, having sybils makes it more likely that one of the sybils will be chosen to fulfill a request. This can allow a user to increase his utility by setting a lower threshold; that is, to use a strategy S_{k'} where k' is smaller than the k used by the Nash equilibrium strategy. Intuitively, the need for money is not as critical if money is easier to obtain. Unlike the first concern, this seems like a real issue. It seems reasonable to believe that when people make a decision between a number of nodes to satisfy a request they do so at random, at least to some extent. Even if they look for advertised node features to help make this decision, sybils would allow a user to advertise a wide range of features. Third, an agent can drive down the cost of fulfilling a request by introducing many sybils. Similarly, he could increase the cost (and thus the value of his money) by making a number of sybils leave the system.
Conceivably he could alternate between these techniques to magnify the effects of the work he does. We have not yet calculated the exact effect of this change (it interacts with the other two effects of having sybils that we have already noted). Given the number of sybils that would be needed to cause a real change in the perceived size of a large P2P network, the practicality of this attack depends heavily on how much sybils cost an attacker and what resources he has available. The second point raised regarding sybils also applies to collusion if we allow money to be "loaned". If k agents collude, they can agree that, if one runs out of money, another in the group will loan him money. By pooling their money in this way, the k agents can again do better by setting a lower threshold. Note that the "loan" mechanism doesn't need to be built into the system; the agents can simply use a "fake" transaction to transfer the money. These appear to be the main avenues for collusive attacks, but we are still exploring this issue. 7. CONCLUSION We have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy. Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides formal mechanisms for solving some important problems in implementing scrip systems. It tells us that with a fixed population of rational users, such systems are very unlikely to become unstable. Thus, if this stability is common belief among the agents, we would not expect inflation, bubbles, or crashes because of agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate. Our analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or, equivalently, adjust prices to achieve the same goal). There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria (are there usually two?). In addition, we expect that one should be able to compute analytic estimates for the best-response function and optimal pricing, which would allow us to understand the relationship between pricing and various parameters in the model. It would also be of great interest to extend our analysis to handle more realistic settings. We mention a few possible extensions here:
• We have assumed that the world is homogeneous in a number of ways, including request frequency, utility, and ability to satisfy requests. It would be interesting to examine how relaxing any of these assumptions would alter our results.
• We have assumed that there is no cost to an agent to be a member of the system. Suppose instead that we imposed a small cost simply for being present in the system, to reflect the costs of routing messages and overlay maintenance. This modification could have a significant impact on sybil attacks.
• We have described a scrip system that works when there are no altruists and have shown that no system can work once there are sufficiently many altruists. What happens between these extremes?
• One type of "irrational" behavior encountered with scrip systems is hoarding. There are some similarities between hoarding and altruistic behavior.
While an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money). It would be interesting to investigate the extent to which our system is robust against hoarders. Clearly, with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.
• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests. It would be interesting to investigate the effect of such communities on the equilibrium of our system.
There are also a number of implementation issues that would have to be resolved in a real system. For example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided. Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. For example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture. Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Like maintaining a presence in the network, this imposes a cost on the node, but, unlike that responsibility, it can be easily shirked. Karma suggests several ways to incentivize nodes to perform these duties. We have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.
Efficiency and Nash Equilibria in a Scrip System for P2P Networks ABSTRACT A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking. 1. INTRODUCTION A common feature of many online distributed systems is that individuals provide services for each other. Peer-to-peer (P2P) networks (such as Kazaa [25] or BitTorrent [3]) have proved popular as mechanisms for file sharing, and applications such as distributed computation and file storage are on the horizon; systems such as Seti@home [24] provide computational assistance; systems such as Slashdot [21] provide content, evaluations, and advice forums in which people answer each other's questions. Having individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the individual providing it. However, the cost of providing service can still be nontrivial. For example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some filesharing systems, there is the possibility of being sued, which can be viewed as part of the cost. Thus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it. This is not merely a theoretical problem; studies of the Gnutella [22] network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [1]. Having relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system. Moreover, current trends seem to be leading to the elimination of the "altruistic" users on which these systems rely. These heavy users are some of the most expensive customers ISPs have. Thus, as the amount of traffic has grown, ISPs have begun to seek ways to reduce this traffic. Some universities have started charging students for excessive bandwidth usage; others revoke network access for it [5]. A number of companies have also formed whose service is to detect excessive bandwidth usage [19]. These trends make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems. A significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files. Some of the P2P networks currently in use have implemented versions of these techniques. However, these approaches tend to fall into one of two categories: either they are "barter-like" or reputational. By barter-like, we mean that each agent bases its decisions only on information it has derived from its own interactions.
Perhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work. Since the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones. An example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule [8]. Each user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past. However, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful. Anagnostakis and Greenwald [2] present a more sophisticated version of this approach, but it still seems to suffer from similar problems. A number of attempts have been made at providing general reputation systems (e.g. [12, 13, 17, 27]). The basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation. However, these attempts tend to suffer from practical problems because they implicitly view users as either "good" or "bad", assume that the "good" users will act according to the specified protocol, and that there are relatively few "bad" users. Unfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it. We cannot count on only a few users being "bad" (in the sense of not following the prescribed protocol). For example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users. However, to avoid penalizing new users, they gave new users an average rating. Users discovered that they could use this relatively good rating to free ride for a while and, once it started to get bad, they could delete their stored information and effectively come back as a "new" user, thus circumventing the system (see [2] for a discussion and [11] for a formal analysis of this "whitewashing"). Thus Kazaa's reputation system is ineffective. This is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would. Recent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear. The analyses of the practical vulnerabilities and the existence of such systems that are immune to such attacks remains an area of active research (e.g., [4, 28, 14]). Simple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement, and are quite popular (see, e.g., [13, 15, 26]). However, they have a different set of problems. Perhaps the most common involves determining the amount of money in the system. Roughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make requests. On the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already. A related problem involves handling newcomers.
If newcomers are each given a positive amount of money, then the system is open to sybil attacks. Perhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26]. In this paper, we provide a formal model in which to analyze scrip systems. We describe a simple scrip system and show that, under reasonable assumptions, for each fixed amount of money there is a nontrivial Nash equilibrium involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k. An interesting aspect of our analysis is that, in equilibrium, the distribution of users with each amount of money is the distribution that maximizes entropy (subject to the money supply constraint). This allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents. It also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency. While assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable. In systems where the goal is to do work, new users can acquire money by performing work. It should also work in Kazaa-like systems where a user can come in with some resources (e.g., a private collection of MP3s). The rest of the paper is organized as follows. In Section 2, we present our formal model and observe that it can be used to understand the effect of altruists. In Section 3, we examine what happens in the game under nonstrategic play, if all agents use the same threshold strategy. We show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy. Using this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy. Moreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users. In Section 6, we discuss the extent to which our approach can handle sybils and collusion. We conclude in Section 7. 2. THE MODEL 3. THE GAME UNDER NONSTRATEGIC PLAY 4. THE GAME UNDER STRATEGIC PLAY 5. SOCIAL WELFARE AND SCALABILITY 6. SYBILS AND COLLUSION 7. CONCLUSION We have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy. Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides formal mechanisms for solving some important problems in implementing scrip systems. It tells us that with a fixed population of rational users, such systems are very unlikely to become unstable. Thus, if this stability is common belief among the agents, we would not expect inflation, bubbles, or crashes because of agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate.
Our analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or, equivalently, adjust prices to achieve the same goal). There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria (are there usually two?). In addition, we expect that one should be able to compute analytic estimates for the best-response function and optimal pricing, which would allow us to understand the relationship between pricing and various parameters in the model. It would also be of great interest to extend our analysis to handle more realistic settings. We mention a few possible extensions here:
• We have assumed that the world is homogeneous in a number of ways, including request frequency, utility, and ability to satisfy requests. It would be interesting to examine how relaxing any of these assumptions would alter our results.
• We have assumed that there is no cost to an agent to be a member of the system. Suppose instead that we imposed a small cost simply for being present in the system, to reflect the costs of routing messages and overlay maintenance. This modification could have a significant impact on sybil attacks.
• We have described a scrip system that works when there are no altruists and have shown that no system can work once there are sufficiently many altruists. What happens between these extremes?
• One type of "irrational" behavior encountered with scrip systems is hoarding. There are some similarities between hoarding and altruistic behavior. While an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money). It would be interesting to investigate the extent to which our system is robust against hoarders. Clearly, with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.
• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests. It would be interesting to investigate the effect of such communities on the equilibrium of our system.
There are also a number of implementation issues that would have to be resolved in a real system. For example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided. Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. For example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture. Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Like maintaining a presence in the network, this imposes a cost on the node, but, unlike that responsibility, it can be easily shirked. Karma suggests several ways to incentivize nodes to perform these duties. We have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.
Efficiency and Nash Equilibria in a Scrip System for P2P Networks ABSTRACT A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking. 1. INTRODUCTION A common feature of many online distributed systems is that individuals provide services for each other. Having individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the individual providing it. However, the cost of providing service can still be nontrivial. For example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some filesharing systems, there is the possibility of being sued, which can be viewed as part of the cost. Thus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it. Having relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system. Moreover, current trends seem to be leading to the elimination of the "altruistic" users on which these systems rely. These heavy users are some of the most expensive customers ISPs have. A number of companies have also formed whose service is to detect excessive bandwidth usage [19]. These trends make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems. A significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files. Some of the P2P networks currently in use have implemented versions of these techniques. However, these approaches tend to fall into one of two categories: either they are "barter-like" or reputational. By barter-like, we mean that each agent bases its decisions only on information it has derived from its own interactions. Perhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work. Since the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones. An example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule [8]. Each user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past. However, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful.
A number of attempts have been made at providing general reputation systems (e.g. [12, 13, 17, 27]). The basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation. Unfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it. We cannot count on only a few users being "bad" (in the sense of not following the prescribed protocol). For example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users. However, to avoid penalizing new users, they gave new users an average rating. Thus Kazaa's reputation system is ineffective. This is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would. Recent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear. The analyses of the practical vulnerabilities and the existence of such systems that are immune to such attacks remains an area of active research (e.g., [4, 28, 14]). Simple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement, and are quite popular (see, e.g., [13, 15, 26]). However, they have a different set of problems. Perhaps the most common involves determining the amount of money in the system. Roughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make requests. On the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already. A related problem involves handling newcomers. If newcomers are each given a positive amount of money, then the system is open to sybil attacks. Perhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26]. In this paper, we provide a formal model in which to analyze scrip systems. Our analysis allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents. It also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency. While assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable. In systems where the goal is to do work, new users can acquire money by performing work. It should also work in Kazaa-like systems where a user can come in with some resources (e.g., a private collection of MP3s). In Section 2, we present our formal model and observe that it can be used to understand the effect of altruists. In Section 3, we examine what happens in the game under nonstrategic play, if all agents use the same threshold strategy. We show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy.
Using this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy. Moreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users. In Section 6, we discuss the extent to which our approach can handle sybils and collusion. We conclude in Section 7. 7. CONCLUSION We have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy. Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides formal mechanisms for solving some important problems in implementing scrip systems. It tells us that with a fixed population of rational users, such systems are very unlikely to become unstable. Thus, if this stability is common belief among the agents, we would not expect inflation, bubbles, or crashes because of agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate. Our analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or, equivalently, adjust prices to achieve the same goal). There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria (are there usually two?). It would also be of great interest to extend our analysis to handle more realistic settings. It would be interesting to examine how relaxing any of our homogeneity assumptions would alter our results.
• We have assumed that there is no cost to an agent to be a member of the system. Suppose instead that we imposed a small cost simply for being present in the system, to reflect the costs of routing messages and overlay maintenance. This modification could have a significant impact on sybil attacks.
• We have described a scrip system that works when there are no altruists and have shown that no system can work once there are sufficiently many altruists. What happens between these extremes?
• One type of "irrational" behavior encountered with scrip systems is hoarding. While an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money). It would be interesting to investigate the extent to which our system is robust against hoarders. Clearly, with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.
• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests. It would be interesting to investigate the effect of such communities on the equilibrium of our system.
There are also a number of implementation issues that would have to be resolved in a real system. For example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided.
Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Karma suggests several ways to incentivize nodes to perform these duties. We have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.
I-58
An Efficient Heuristic Approach for Security Against Multiple Adversaries
In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue. This paper focuses on domains where these threats come from unknown adversaries. These domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games. However, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies. In this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium. Previous work has shown this problem of optimal strategy selection to be NP-hard. Therefore, we present a heuristic called ASAP, with three key advantages to address the problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches.
[ "heurist approach", "adversari multiag domain", "bayesian game", "np-hard", "agent secur via approxim polici", "agent system secur", "bay-nash equilibrium", "bayesian and stackelberg game", "patrol domain", "decomposit for multipl adversari", "mix-integ linear program", "game theori" ]
[ "P", "P", "P", "P", "M", "M", "M", "M", "M", "M", "M", "M" ]
An Efficient Heuristic Approach for Security Against Multiple Adversaries Praveen Paruchuri, Jonathan P. Pearce, Milind Tambe, Fernando Ordonez University of Southern California Los Angeles, CA 90089 {paruchur, jppearce, tambe, fordon}@usc.edu Sarit Kraus Bar-Ilan University Ramat-Gan 52900, Israel sarit@cs.biu.ac.il ABSTRACT In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue. This paper focuses on domains where these threats come from unknown adversaries. These domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games. However, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies. In this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium. Previous work has shown this problem of optimal strategy selection to be NP-hard. Therefore, we present a heuristic called ASAP, with three key advantages to address the problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches. Categories and Subject Descriptors I.2.11 [Computing Methodologies]: Artificial Intelligence: Distributed Artificial Intelligence - Intelligent Agents General Terms Security, Design, Theory 1. INTRODUCTION In many multiagent domains, agents must act in order to provide security against attacks by adversaries. A common issue that agents face in such security domains is uncertainty about the adversaries they may be facing. For example, a security robot may need to make a choice about which areas to patrol, and how often [16]. However, it will not know in advance exactly where a robber will choose to strike. A team of unmanned aerial vehicles (UAVs) [1] monitoring a region undergoing a humanitarian crisis may also need to choose a patrolling policy. They must make this decision without knowing in advance whether terrorists or other adversaries may be waiting to disrupt the mission at a given location. It may indeed be possible to model the motivations of types of adversaries the agent or agent team is likely to face in order to target these adversaries more closely. However, in both cases, the security robot or UAV team will not know exactly which kinds of adversaries may be active on any given day. A common approach for choosing a policy for agents in such scenarios is to model the scenarios as Bayesian games. A Bayesian game is a game in which agents may belong to one or more types; the type of an agent determines its possible actions and payoffs. The distribution of adversary types that an agent will face may be known or inferred from historical data. Usually, these games are analyzed according to the solution concept of a Bayes-Nash equilibrium, an extension of the Nash equilibrium for Bayesian games.
However, in many settings, a Nash or Bayes-Nash equilibrium is not an appropriate solution concept, since it assumes that the agents' strategies are chosen simultaneously [5]. In some settings, one player can (or must) commit to a strategy before the other players choose their strategies. These scenarios are known as Stackelberg games [6]. In a Stackelberg game, a leader commits to a strategy first, and then a follower (or group of followers) selfishly optimize their own rewards, considering the action chosen by the leader. For example, the security agent (leader) must first commit to a strategy for patrolling various areas. This strategy could be a mixed strategy in order to be unpredictable to the robbers (followers). The robbers, after observing the pattern of patrols over time, can then choose their strategy (which location to rob). Often, the leader in a Stackelberg game can attain a higher reward than if the strategies were chosen simultaneously. To see the advantage of being the leader in a Stackelberg game, consider a simple game with the payoff table as shown in Table 1. The leader is the row player and the follower is the column player. Here, the leader's payoff is listed first.

      1      2      3
1   5,5    0,0    3,10
2   0,0    2,2    5,0

Table 1: Payoff table for example normal form game.

The only Nash equilibrium for this game is when the leader plays 2 and the follower plays 2, which gives the leader a payoff of 2. However, if the leader commits to a uniform mixed strategy of playing 1 and 2 with equal (0.5) probability, the follower's best response is to play 3 to get an expected payoff of 5 (10 and 0 with equal probability). The leader's payoff would then be 4 (3 and 5 with equal probability). In this case, the leader now has an incentive to deviate and choose a pure strategy of 2 (to get a payoff of 5). However, this would cause the follower to deviate to strategy 2 as well, resulting in the Nash equilibrium. Thus, by committing to a strategy that is observed by the follower, and by avoiding the temptation to deviate, the leader manages to obtain a reward higher than that of the best Nash equilibrium. The problem of choosing an optimal strategy for the leader to commit to in a Stackelberg game is analyzed in [5] and found to be NP-hard in the case of a Bayesian game with multiple types of followers. Thus, efficient heuristic techniques for choosing high-reward strategies in these games are an important open issue. Methods for finding optimal leader strategies for non-Bayesian games [5] can be applied to this problem by converting the Bayesian game into a normal-form game by the Harsanyi transformation [8]. If, on the other hand, we wish to compute the highest-reward Nash equilibrium, new methods using mixed-integer linear programs (MILPs) [17] may be used, since the highest-reward Bayes-Nash equilibrium is equivalent to the corresponding Nash equilibrium in the transformed game. However, by transforming the game, the compact structure of the Bayesian game is lost. In addition, since the Nash equilibrium assumes a simultaneous choice of strategies, the advantages of being the leader are not considered. This paper introduces an efficient heuristic method for approximating the optimal leader strategy for security domains, known as ASAP (Agent Security via Approximate Policies). This method has three key advantages.
First, it directly searches for an optimal strategy, rather than a Nash (or Bayes-Nash) equilibrium, thus allowing it to find high-reward non-equilibrium strategies like the one in the above example. Second, it generates policies with a support which can be expressed as a uniform distribution over a multiset of fixed size, as proposed in [12]. This allows for policies that are simple to understand and represent [12], as well as a tunable parameter (the size of the multiset) that controls the simplicity of the policy. Third, the method allows for a Bayes-Nash game to be expressed compactly without conversion to a normal-form game, allowing for large speedups over existing Nash methods such as [17] and [11]. The rest of the paper is organized as follows. In Section 2 we fully describe the patrolling domain and its properties. Section 3 introduces the Bayesian game, the Harsanyi transformation, and existing methods for finding an optimal leader's strategy in a Stackelberg game. Then, in Section 4 the ASAP algorithm is presented for normal-form games, and in Section 5 we show how it can be adapted to the structure of Bayesian games with uncertain adversaries. Experimental results showing higher reward and faster policy computation over existing Nash methods are shown in Section 6, and we conclude with a discussion of related work in Section 7. 2. THE PATROLLING DOMAIN In most security patrolling domains, the security agents (like UAVs [1] or security robots [16]) cannot feasibly patrol all areas all the time. Instead, they must choose a policy by which they patrol various routes at different times, taking into account factors such as the likelihood of crime in different areas, possible targets for crime, and the security agents' own resources (number of security agents, amount of available time, fuel, etc.). It is usually beneficial for this policy to be nondeterministic so that robbers cannot safely rob certain locations, knowing that they will be safe from the security agents [14]. To demonstrate the utility of our algorithm, we use a simplified version of such a domain, expressed as a game. The most basic version of our game consists of two players: the security agent (the leader) and the robber (the follower) in a world consisting of m houses, 1 ... m. The security agent's set of pure strategies consists of possible routes of d houses to patrol (in an order). The security agent can choose a mixed strategy so that the robber will be unsure of exactly where the security agent may patrol, but the robber will know the mixed strategy the security agent has chosen. For example, the robber can observe over time how often the security agent patrols each area. With this knowledge, the robber must choose a single house to rob. We assume that the robber generally takes a long time to rob a house. If the house chosen by the robber is not on the security agent's route, then the robber successfully robs it. Otherwise, if it is on the security agent's route, then the earlier the house is on the route, the easier it is for the security agent to catch the robber before he finishes robbing it. We model the payoffs for this game with the following variables: • vl,x: value of the goods in house l to the security agent. • vl,q: value of the goods in house l to the robber. • cx: reward to the security agent of catching the robber. • cq: cost to the robber of getting caught. • pl: probability that the security agent can catch the robber at the lth house in the patrol (pl < pl′ ⇐⇒ l′ < l).
The security agent's set of possible pure strategies (patrol routes) is denoted by X and includes all d-tuples i = ⟨w1, w2, ..., wd⟩ with w1, ..., wd ∈ {1, ..., m} where no two elements are equal (the agent is not allowed to return to the same house). The robber's set of possible pure strategies (houses to rob) is denoted by Q and includes all integers j = 1 ... m. The payoffs (security agent, robber) for pure strategies i, j are: • −vl,x, vl,q, for j = l ∉ i. • pl cx + (1 − pl)(−vl,x), −pl cq + (1 − pl)(vl,q), for j = l ∈ i. With this structure it is possible to model many different types of robbers who have differing motivations; for example, one robber may have a lower cost of getting caught than another, or may value the goods in the various houses differently. If the distribution of different robber types is known or inferred from historical data, then the game can be modeled as a Bayesian game [6]. 3. BAYESIAN GAMES A Bayesian game contains a set of N agents, and each agent n must be one of a given set of types θn. For our patrolling domain, we have two agents, the security agent and the robber. θ1 is the set of security agent types and θ2 is the set of robber types. Since there is only one type of security agent, θ1 contains only one element. During the game, the robber knows its type but the security agent does not know the robber's type. For each agent (the security agent or the robber) n, there is a set of strategies σn and a utility function un : θ1 × θ2 × σ1 × σ2 → ℝ. A Bayesian game can be transformed into a normal-form game using the Harsanyi transformation [8]. Once this is done, new, linear-program (LP)-based methods for finding high-reward strategies for normal-form games [5] can be used to find a strategy in the transformed game; this strategy can then be used for the Bayesian game. While methods exist for finding Bayes-Nash equilibria directly, without the Harsanyi transformation [10], they find only a single equilibrium in the general case, which may not be of high reward. Recent work [17] has led to efficient mixed-integer linear program techniques to find the best Nash equilibrium for a given agent. However, these techniques do require a normal-form game, and so to compare the policies given by ASAP against the optimal policy, as well as against the highest-reward Nash equilibrium, we must apply these techniques to the Harsanyi-transformed matrix. The next two subsections elaborate on how this is done. 3.1 Harsanyi Transformation The first step in solving Bayesian games is to apply the Harsanyi transformation [8] that converts the Bayesian game into a normal form game. Given that the Harsanyi transformation is a standard concept in game theory, we explain it briefly through a simple example in our patrolling domain without introducing the mathematical formulations. Let us assume there are two robber types a and b in the Bayesian game. Robber a will be active with probability α, and robber b will be active with probability 1 − α. The rules described in Section 2 allow us to construct simple payoff tables. Assume that there are two houses in the world (1 and 2) and hence there are two patrol routes (pure strategies) for the agent: {1,2} and {2,1}. The robber can rob either house 1 or house 2 and hence he has two strategies (denoted as 1l, 2l for robber type l).
Since there are two types assumed (denoted as a and b), we construct two payoff tables (shown in Table 2) corresponding to the security agent playing a separate game with each of the two robber types with probabilities α and 1 − α. First, consider robber type a. Borrowing the notation from the domain section, we assign the following values to the variables: v1,x = v1,q = 3/4, v2,x = v2,q = 1/4, cx = 1/2, cq = 1, p1 = 1, p2 = 1/2. Using these values we construct a base payoff table as the payoff for the game against robber type a. For example, if the security agent chooses route {1,2} when robber a is active, and robber a chooses house 1, the robber receives a reward of -1 (for being caught) and the agent receives a reward of 0.5 for catching the robber. The payoffs for the game against robber type b are constructed using different values.

              Security agent:
              {1,2}           {2,1}
Robber a  1a  -1, .5          -.375, .125
          2a  -.125, -.125    -1, .5
Robber b  1b  -.9, .6         -.275, .225
          2b  -.025, -.025    -.9, .6

Table 2: Payoff tables: Security Agent vs Robbers a and b

Using the Harsanyi technique involves introducing a chance node that determines the robber's type, thus transforming the security agent's incomplete information regarding the robber into imperfect information [3]. The Bayesian equilibrium of the game is then precisely the Nash equilibrium of the imperfect information game. The transformed, normal-form game is shown in Table 3. In the transformed game, the security agent is the column player, and the set of all robber types together is the row player. Suppose that robber type a robs house 1 and robber type b robs house 2, while the security agent chooses patrol {1,2}. Then, the security agent and the robber receive an expected payoff corresponding to their payoffs from the agent encountering robber a at house 1 with probability α and robber b at house 2 with probability 1 − α. 3.2 Finding an Optimal Strategy Although a Nash equilibrium is the standard solution concept for games in which agents choose strategies simultaneously, in our security domain, the security agent (the leader) can gain an advantage by committing to a mixed strategy in advance. Since the followers (the robbers) will know the leader's strategy, the optimal response for the followers will be a pure strategy. Given the common assumption, taken in [5], that in the case where followers are indifferent, they will choose the strategy that benefits the leader, there must exist a guaranteed optimal strategy for the leader [5]. From the Bayesian game in Table 2, we constructed the Harsanyi transformed bimatrix in Table 3. The strategies for each player (security agent or robber) in the transformed game correspond to all combinations of possible strategies taken by each of that player's types. Therefore, we denote X = σ1^θ1 = σ1 and Q = σ2^θ2 as the index sets of the security agent and robbers' pure strategies respectively, with R and C as the corresponding payoff matrices. Rij is the reward of the security agent and Cij is the reward of the robbers when the security agent takes pure strategy i and the robbers take pure strategy j. A mixed strategy for the security agent is a probability distribution over its set of pure strategies and will be represented by a vector x = (px1, px2, ..., px|X|), where pxi ≥ 0 and Σ pxi = 1. Here, pxi is the probability that the security agent will choose its ith pure strategy.
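The construction of Tables 2 and 3 is mechanical enough to script. The sketch below is our own illustration, not code from the paper: the helper names and the convention of indexing the catch probability by the house's position in the patrol are our assumptions, chosen to be consistent with the worked case above (route {1,2} vs. robber a at house 1 yields -1 and 0.5). It builds each robber type's payoff matrices from the domain variables of Section 2 and then applies the Harsanyi transformation, one joint column per tuple of per-type pure strategies.

```python
# Hypothetical sketch: per-type payoff matrices + Harsanyi transformation.
from itertools import permutations, product
import numpy as np

def patrol_payoffs(v_x, v_q, c_x, c_q, p, d):
    m = len(v_x)
    routes = list(permutations(range(m), d))   # leader pure strategies
    R = np.zeros((len(routes), m))             # security agent payoffs
    C = np.zeros((len(routes), m))             # robber payoffs
    for i, route in enumerate(routes):
        for j in range(m):                     # robber robs house j
            if j in route:
                pl = p[route.index(j)]         # catch prob. at that position
                R[i, j] = pl * c_x + (1 - pl) * (-v_x[j])
                C[i, j] = -pl * c_q + (1 - pl) * v_q[j]
            else:                              # robbery succeeds unopposed
                R[i, j], C[i, j] = -v_x[j], v_q[j]
    return routes, R, C

def harsanyi(Rs, Cs, priors):
    """One column per joint pure strategy (j_a, j_b, ...); payoffs are the
    prior-weighted sums over types, as in Table 3."""
    cols = list(product(*[range(C.shape[1]) for C in Cs]))
    R = np.zeros((Rs[0].shape[0], len(cols)))
    C = np.zeros_like(R)
    for col, js in enumerate(cols):
        for l, j in enumerate(js):
            R[:, col] += priors[l] * Rs[l][:, j]
            C[:, col] += priors[l] * Cs[l][:, j]
    return R, C, cols

# Robber type a from Section 3.1 (type b would use its own values):
routes, Ra, Ca = patrol_payoffs([.75, .25], [.75, .25], .5, 1., [1., .5], d=2)
```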
The optimal mixed strategy for the security agent can be found in time polynomial in the number of rows in the normal form game using the following linear program formulation from [5]. For every possible pure strategy j by the follower (the set of all robber types),

max Σi∈X pxi Rij
s.t. ∀j′ ∈ Q, Σi∈σ1 pxi Cij ≥ Σi∈σ1 pxi Cij′
     Σi∈X pxi = 1
     ∀i ∈ X, pxi ≥ 0    (1)

Then, for all feasible follower strategies j, choose the one that maximizes Σi∈X pxi Rij, the reward for the security agent (leader). The pxi variables give the optimal strategy for the security agent. Note that while this method is polynomial in the number of rows in the transformed, normal-form game, the number of rows increases exponentially with the number of robber types. Using this method for a Bayesian game thus requires running |σ2|^|θ2| separate linear programs. This is no surprise, since finding the leader's optimal strategy in a Bayesian Stackelberg game is NP-hard [5]. 4. HEURISTIC APPROACHES Given that finding the optimal strategy for the leader is NP-hard, we provide a heuristic approach. In this heuristic we limit the possible mixed strategies of the leader to select actions with probabilities that are integer multiples of 1/k for a predetermined integer k. Previous work [14] has shown that strategies with high entropy are beneficial for security applications when opponents' utilities are completely unknown. In our domain, if utilities are not considered, this method will result in uniform-distribution strategies. One advantage of such strategies is that they are compact to represent (as fractions) and simple to understand; therefore they can be efficiently implemented by real organizations. We aim to maintain the advantage provided by simple strategies for our security application problem, incorporating the effect of the robbers' rewards on the security agent's rewards. Thus, the ASAP heuristic will produce strategies which are k-uniform. A mixed strategy is denoted k-uniform if it is a uniform distribution on a multiset S of pure strategies with |S| = k. A multiset is a set whose elements may be repeated multiple times; thus, for example, the mixed strategy corresponding to the multiset {1, 1, 2} would take strategy 1 with probability 2/3 and strategy 2 with probability 1/3.

           {1,2}                                   {2,1}
{1a, 1b}   −1α − .9(1−α), .5α + .6(1−α)            −.375α − .275(1−α), .125α + .225(1−α)
{1a, 2b}   −1α − .025(1−α), .5α − .025(1−α)        −.375α − .9(1−α), .125α + .6(1−α)
{2a, 1b}   −.125α − .9(1−α), −.125α + .6(1−α)      −1α − .275(1−α), .5α + .225(1−α)
{2a, 2b}   −.125α − .025(1−α), −.125α − .025(1−α)  −1α − .9(1−α), .5α + .6(1−α)

Table 3: Harsanyi Transformed Payoff Table

ASAP allows the size of the multiset to be chosen in order to balance the complexity of the strategy reached with the goal that the identified strategy will yield a high reward. Another advantage of the ASAP heuristic is that it operates directly on the compact Bayesian representation, without requiring the Harsanyi transformation. This is because the different follower (robber) types are independent of each other. Hence, evaluating the leader strategy against a Harsanyi-transformed game matrix is equivalent to evaluating against each of the game matrices for the individual follower types. This independence property is exploited in ASAP to yield a decomposition scheme.
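The multiple-LPs method just described, solving LP (1) once per follower pure strategy of the transformed game and keeping the best feasible answer, can be prototyped with any off-the-shelf LP solver. The sketch below is our own illustration using scipy.optimize.linprog (a tooling assumption on our part; the paper's experiments use CPLEX); R and C are leader-rows matrices such as those returned by the hypothetical harsanyi helper above.

```python
# Sketch of the multiple-LPs method of [5] with SciPy; linprog minimizes,
# so we negate the leader's objective.
import numpy as np
from scipy.optimize import linprog

def optimal_leader_strategy(R, C):
    n, m = R.shape
    best_value, best_x = -np.inf, None
    for j in range(m):                       # one LP per follower pure strategy
        # Feasibility: follower weakly prefers j, i.e.
        # x . (C[:, jp] - C[:, j]) <= 0 for every alternative jp.
        A_ub = np.array([C[:, jp] - C[:, j] for jp in range(m)])
        b_ub = np.zeros(m)
        res = linprog(-R[:, j], A_ub=A_ub, b_ub=b_ub,
                      A_eq=np.ones((1, n)), b_eq=[1.0],
                      bounds=[(0, None)] * n)
        if res.success and -res.fun > best_value:
            best_value, best_x = -res.fun, res.x
    return best_value, best_x                # leader value, mixed strategy
```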
Note that the LP method introduced by [5] to compute optimal Stackelberg policies is unlikely to be decomposable into a small number of games, as it was shown to be NP-hard for Bayes-Nash problems. Finally, note that ASAP requires the solution of only one optimization problem, rather than solving a series of problems as in the LP method of [5]. For a single follower type, the algorithm works the following way. Given a particular k, for each possible mixed strategy x for the leader that corresponds to a multiset of size k, evaluate the leader's payoff from x when the follower plays a reward-maximizing pure strategy. We then take the mixed strategy with the highest payoff. We need only to consider the reward-maximizing pure strategies of the followers (robbers), since for a given fixed strategy x of the security agent, each robber type faces a problem with fixed linear rewards. If a mixed strategy is optimal for the robber, then so are all the pure strategies in the support of that mixed strategy. Note also that because we limit the leader's strategies to take on discrete values, the assumption from Section 3.2 that the followers will break ties in the leader's favor is not significant, since ties will be unlikely to arise. This is because, in domains where rewards are drawn from any random distribution, the probability of a follower having more than one pure optimal response to a given leader strategy approaches zero, and the leader will have only a finite number of possible mixed strategies. Our approach to characterize the optimal strategy for the security agent makes use of properties of linear programming. We briefly outline these results here for completeness; for detailed discussion and proofs see one of many references on the topic, such as [2]. Every linear programming problem, such as max {c^T x | Ax = b, x ≥ 0}, has an associated dual linear program, in this case min {b^T y | A^T y ≥ c}. These primal/dual pairs of problems satisfy weak duality: for any x and y primal and dual feasible solutions respectively, c^T x ≤ b^T y. Thus a pair of feasible solutions is optimal if c^T x = b^T y, and the problems are said to satisfy strong duality. In fact, if a linear program is feasible and has a bounded optimal solution, then the dual is also feasible and there is a pair x*, y* that satisfies c^T x* = b^T y*. These optimal solutions are characterized with the following optimality conditions (as defined in [2]): • primal feasibility: Ax = b, x ≥ 0 • dual feasibility: A^T y ≥ c • complementary slackness: xi (A^T y − c)i = 0 for all i. Note that this last condition implies that c^T x = x^T A^T y = b^T y, which proves optimality for primal-dual feasible solutions x and y. In the following subsections, we first define the problem in its most intuitive form as a mixed-integer quadratic program (MIQP), and then show how this problem can be converted into a mixed-integer linear program (MILP). 4.1 Mixed-Integer Quadratic Program We begin with the case of a single type of follower. Let the leader be the row player and the follower the column player. We denote by x the vector of strategies of the leader and q the vector of strategies of the follower. We also denote by X and Q the index sets of the leader and follower's pure strategies, respectively. The payoff matrices R and C correspond to: Rij is the reward of the leader and Cij is the reward of the follower when the leader takes pure strategy i and the follower takes pure strategy j. Let k be the size of the multiset.
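Before the MIQP formulation that follows, the single-type procedure just described can be written down as plain enumeration. The brute-force sketch below is our own illustration and is feasible only for tiny games, since the number of size-k multisets grows combinatorially; Problems (4)-(6) below solve the same search without exhaustive enumeration.

```python
# Hypothetical brute-force ASAP for a single follower type: enumerate every
# size-k multiset of leader pure strategies, let the follower best-respond
# (ties broken in the leader's favor, per Section 3.2), keep the best policy.
from itertools import combinations_with_replacement
import numpy as np

def asap_bruteforce(R, C, k):
    n, m = R.shape
    best_value, best_x = -np.inf, None
    for multiset in combinations_with_replacement(range(n), k):
        x = np.bincount(multiset, minlength=n) / k   # k-uniform mixed strategy
        follower_vals = x @ C                        # follower's value of each j
        ties = np.flatnonzero(np.isclose(follower_vals, follower_vals.max()))
        leader_vals = x @ R
        j = ties[np.argmax(leader_vals[ties])]       # leader-favorable tie-break
        if leader_vals[j] > best_value:
            best_value, best_x = leader_vals[j], x
    return best_value, best_x
```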
We first fix the policy of the leader to some k-uniform policy x. The value xi is the number of times pure strategy i is used in the k-uniform policy, which is selected with probability xi/k. We formulate the optimization problem the follower solves to find its optimal response to x as the following linear program:

max Σj∈Q Σi∈X (1/k) Cij xi qj
s.t. Σj∈Q qj = 1
     q ≥ 0.    (2)

The objective function maximizes the follower's expected reward given x, while the constraints make feasible any mixed strategy q for the follower. The dual to this linear programming problem is the following:

min a
s.t. a ≥ Σi∈X (1/k) Cij xi,  j ∈ Q.    (3)

From strong duality and complementary slackness we obtain that the follower's maximum reward value, a, is the value of every pure strategy with qj > 0, that is, of every pure strategy in the support of the optimal mixed strategy. Therefore each of these pure strategies is optimal. Optimal solutions to the follower's problem are characterized by linear programming optimality conditions: primal feasibility constraints in (2), dual feasibility constraints in (3), and complementary slackness

qj (a − Σi∈X (1/k) Cij xi) = 0,  j ∈ Q.

These conditions must be included in the problem solved by the leader in order to consider only best responses by the follower to the k-uniform policy x. The leader seeks the k-uniform solution x that maximizes its own payoff, given that the follower uses an optimal response q(x). Therefore the leader solves the following integer problem:

max Σi∈X Σj∈Q (1/k) Rij q(x)j xi
s.t. Σi∈X xi = k
     xi ∈ {0, 1, ..., k}.    (4)

Problem (4) maximizes the leader's reward with the follower's best response (qj for fixed leader's policy x and hence denoted q(x)j) by selecting a uniform policy from a multiset of constant size k. We complete this problem by including the characterization of q(x) through linear programming optimality conditions. To simplify writing the complementary slackness conditions, we will constrain q(x) to be only optimal pure strategies by just considering integer solutions of q(x). The leader's problem becomes:

max_{x,q} Σi∈X Σj∈Q (1/k) Rij xi qj
s.t. Σi xi = k
     Σj∈Q qj = 1
     0 ≤ (a − Σi∈X (1/k) Cij xi) ≤ (1 − qj) M
     xi ∈ {0, 1, ..., k}
     qj ∈ {0, 1}.    (5)

Here, the constant M is some large number. The first and fourth constraints enforce a k-uniform policy for the leader, and the second and fifth constraints enforce a feasible pure strategy for the follower. The third constraint enforces dual feasibility of the follower's problem (leftmost inequality) and the complementary slackness constraint for an optimal pure strategy q for the follower (rightmost inequality). In fact, since only one pure strategy can be selected by the follower, say qh = 1, this last constraint enforces that a = Σi∈X (1/k) Cih xi, imposing no additional constraint for all other pure strategies, which have qj = 0. We conclude this subsection noting that Problem (5) is an integer program with a non-convex quadratic objective in general, as the matrix R need not be positive-semi-definite. Efficient solution methods for non-linear, non-convex integer problems remain a challenging research question. In the next section we show a reformulation of this problem as a linear integer programming problem, for which a number of efficient commercial solvers exist.
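One way to read the third constraint of (5): for the selected pure strategy (qj = 1) it pins a to the follower's value of j, and dual feasibility forces that value to be maximal, which is exactly the complementary slackness condition above. A tiny checker, our own illustration with hypothetical names, makes this concrete:

```python
# Sketch: certify that pure strategy j is a best response to the k-uniform
# policy with count vector x_counts (summing to k), per conditions (2)-(3).
import numpy as np

def certifies_best_response(x_counts, j, C, k, tol=1e-9):
    vals = (C.T @ x_counts) / k   # follower's expected value of each pure strategy
    a = vals.max()                # the dual variable of (3) at optimality
    return vals[j] >= a - tol     # slackness: q_j = 1 forces vals[j] = a
```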
4.2 Mixed-Integer Linear Program We can linearize the quadratic program of Problem (5) through the change of variables zij = xi qj, obtaining the following problem:

max_{q,z} Σi∈X Σj∈Q (1/k) Rij zij
s.t. Σi∈X Σj∈Q zij = k
     Σj∈Q zij ≤ k
     k qj ≤ Σi∈X zij ≤ k
     Σj∈Q qj = 1
     0 ≤ (a − Σi∈X (1/k) Cij (Σh∈Q zih)) ≤ (1 − qj) M
     zij ∈ {0, 1, ..., k}
     qj ∈ {0, 1}    (6)

PROPOSITION 1. Problems (5) and (6) are equivalent. Proof: Consider x, q a feasible solution of (5). We will show that q, zij = xi qj is a feasible solution of (6) of the same objective function value. The equivalence of the objective functions, and constraints 4, 6 and 7 of (6), are satisfied by construction. The fact that Σj∈Q zij = xi, as Σj∈Q qj = 1, explains constraints 1, 2, and 5 of (6). Constraint 3 of (6) is satisfied because Σi∈X zij = k qj. Let us now consider q, z feasible for (6). We will show that q and xi = Σj∈Q zij are feasible for (5) with the same objective value. In fact all constraints of (5) are readily satisfied by construction. To see that the objectives match, notice that if qh = 1 then the third constraint in (6) implies that Σi∈X zih = k, which means that zij = 0 for all i ∈ X and all j ≠ h. Therefore, xi qj = Σl∈Q zil qj = zih qj = zij. This last equality holds because both sides are 0 when j ≠ h. This shows that the transformation preserves the objective function value, completing the proof. Given this transformation to a mixed-integer linear program (MILP), we now show how we can apply our decomposition technique on the MILP to obtain significant speedups for Bayesian games with multiple follower types. 5. DECOMPOSITION FOR MULTIPLE ADVERSARIES The MILP developed in the previous section handles only one follower. Since our security scenario contains multiple follower (robber) types, we change the response function for the follower from a pure strategy into a weighted combination over various pure follower strategies, where the weights are probabilities of occurrence of each of the follower types. 5.1 Decomposed MIQP To admit multiple adversaries in our framework, we modify the notation defined in the previous section to reason about multiple follower types. We denote by x the vector of strategies of the leader and q^l the vector of strategies of follower l, with L denoting the index set of follower types. We also denote by X and Q the index sets of leader and follower l's pure strategies, respectively. We also index the payoff matrices on each follower l, considering the matrices R^l and C^l. Using this modified notation, we characterize the optimal solution of follower l's problem given the leader's k-uniform policy x, with the following optimality conditions:

Σj∈Q q^l_j = 1
a^l − Σi∈X (1/k) C^l_ij xi ≥ 0
q^l_j (a^l − Σi∈X (1/k) C^l_ij xi) = 0
q^l_j ≥ 0

Again, considering only optimal pure strategies for follower l's problem, we can linearize the complementarity constraint above. We incorporate these constraints on the leader's problem that selects the optimal k-uniform policy. Therefore, given a priori probabilities p^l, with l ∈ L, of facing each follower, the leader solves the following problem:

max_{x,q} Σi∈X Σl∈L Σj∈Q (p^l/k) R^l_ij xi q^l_j
s.t. Σi xi = k
     Σj∈Q q^l_j = 1
     0 ≤ (a^l − Σi∈X (1/k) C^l_ij xi) ≤ (1 − q^l_j) M
     xi ∈ {0, 1, ..., k}
     q^l_j ∈ {0, 1}.    (7)

Problem (7) for a Bayesian game with multiple follower types is indeed equivalent to Problem (5) on the payoff matrix obtained from the Harsanyi transformation of the game.
In fact, every pure strategy j in Problem (5) corresponds to a sequence of pure strategies jl, one for each follower l ∈ L. This means that qj = 1 if and only if q^l_{jl} = 1 for all l ∈ L. In addition, given the a priori probabilities p^l of facing player l, the reward in the Harsanyi transformation payoff table is Rij = Σl∈L p^l R^l_{i jl}. The same relation holds between C and the C^l. These relations between a pure strategy in the equivalent normal form game and pure strategies in the individual games with each follower are key in showing these problems are equivalent. 5.2 Decomposed MILP We can linearize the quadratic programming problem (7) through the change of variables z^l_ij = xi q^l_j, obtaining the following problem:

max_{q,z} Σi∈X Σl∈L Σj∈Q (p^l/k) R^l_ij z^l_ij
s.t. Σi∈X Σj∈Q z^l_ij = k
     Σj∈Q z^l_ij ≤ k
     k q^l_j ≤ Σi∈X z^l_ij ≤ k
     Σj∈Q q^l_j = 1
     0 ≤ (a^l − Σi∈X (1/k) C^l_ij (Σh∈Q z^l_ih)) ≤ (1 − q^l_j) M
     Σj∈Q z^l_ij = Σj∈Q z^1_ij
     z^l_ij ∈ {0, 1, ..., k}
     q^l_j ∈ {0, 1}    (8)

PROPOSITION 2. Problems (7) and (8) are equivalent. Proof: Consider x, q^l, a^l with l ∈ L a feasible solution of (7). We will show that q^l, a^l, z^l_ij = xi q^l_j is a feasible solution of (8) of the same objective function value. The equivalence of the objective functions, and constraints 4, 7 and 8 of (8), are satisfied by construction. The fact that Σj∈Q z^l_ij = xi, as Σj∈Q q^l_j = 1, explains constraints 1, 2, 5 and 6 of (8). Constraint 3 of (8) is satisfied because Σi∈X z^l_ij = k q^l_j. Let us now consider q^l, z^l, a^l feasible for (8). We will show that q^l, a^l and xi = Σj∈Q z^1_ij are feasible for (7) with the same objective value. In fact all constraints of (7) are readily satisfied by construction. To see that the objectives match, notice that for each l one q^l_j must equal 1 and the rest equal 0. Let us say that q^l_{jl} = 1; then the third constraint in (8) implies that Σi∈X z^l_{i jl} = k, which means that z^l_ij = 0 for all i ∈ X and all j ≠ jl. In particular this implies that xi = Σj∈Q z^1_ij = z^1_{i j1} = z^l_{i jl}, the last equality following from constraint 6 of (8). Therefore xi q^l_j = z^l_{i jl} q^l_j = z^l_ij. This last equality holds because both sides are 0 when j ≠ jl. Effectively, constraint 6 ensures that all the adversaries are calculating their best responses against a particular fixed policy of the agent. This shows that the transformation preserves the objective function value, completing the proof. We can therefore solve this equivalent linear integer program with efficient integer programming packages which can handle problems with thousands of integer variables. We implemented the decomposed MILP and the results are shown in the following section. 6. EXPERIMENTAL RESULTS The patrolling domain and the payoffs for the associated game are detailed in Sections 2 and 3. We performed experiments for this game in worlds of three and four houses with patrols consisting of two houses. The description given in Section 2 is used to generate a base case for both the security agent and robber payoff functions. The payoff tables for additional robber types are constructed and added to the game by adding a random distribution of varying size to the payoffs in the base case. All games are normalized so that, for each robber type, the minimum and maximum payoffs to the security agent and robber are 0 and 1, respectively.
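The single-type MILP (6) is small enough to state almost verbatim in an open-source modeller. The sketch below is our own illustration using the PuLP package with its bundled CBC solver, a tooling assumption on our part (the experiments here use CPLEX 8.1). The decomposed MILP (8) extends it by indexing z, q, and a by the type l and adding constraint 6 to tie the implied x across types.

```python
# Hypothetical PuLP sketch of the single-follower-type MILP (6).
import pulp

def asap_milp(R, C, k, M=1e4):
    """R, C: |X| x |Q| leader/follower payoffs; k: multiset size; M: big-M."""
    n, m = len(R), len(R[0])
    prob = pulp.LpProblem("asap_milp", pulp.LpMaximize)
    z = [[pulp.LpVariable(f"z_{i}_{j}", 0, k, cat="Integer")
          for j in range(m)] for i in range(n)]
    q = [pulp.LpVariable(f"q_{j}", cat="Binary") for j in range(m)]
    a = pulp.LpVariable("a")  # follower's optimal value (free)
    # Objective: (1/k) sum_ij R_ij z_ij
    prob += (1.0 / k) * pulp.lpSum(R[i][j] * z[i][j]
                                   for i in range(n) for j in range(m))
    prob += pulp.lpSum(z[i][j] for i in range(n) for j in range(m)) == k
    for i in range(n):
        prob += pulp.lpSum(z[i][j] for j in range(m)) <= k
    prob += pulp.lpSum(q) == 1
    for j in range(m):
        col = pulp.lpSum(z[i][j] for i in range(n))
        prob += col >= k * q[j]        # k q_j <= sum_i z_ij
        prob += col <= k
        # Follower's value of j under x/k, with x_i = sum_h z_ih:
        val_j = (1.0 / k) * pulp.lpSum(C[i][j] * z[i][h]
                                       for i in range(n) for h in range(m))
        prob += a - val_j >= 0                 # dual feasibility
        prob += a - val_j <= (1 - q[j]) * M    # complementary slackness
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    x = [sum(pulp.value(z[i][j]) for j in range(m)) / k for i in range(n)]
    return pulp.value(prob.objective), x
```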
Using the data generated, we performed the experiments using four methods for generating the security agent's strategy: • uniform randomization • ASAP • the multiple linear programs method from [5] (to find the true optimal strategy) • the highest-reward Bayes-Nash equilibrium, found using the MIP-Nash algorithm [17] The last three methods were applied using CPLEX 8.1. Because the last two methods are designed for normal-form games rather than Bayesian games, the games were first converted using the Harsanyi transformation [8]. The uniform randomization method is simply choosing a uniform random policy over all possible patrol routes. We use this method as a simple baseline to measure the performance of our heuristics. We anticipated that the uniform policy would perform reasonably well since maximum-entropy policies have been shown to be effective in multiagent security domains [14]. The highest-reward Bayes-Nash equilibria were used in order to demonstrate the higher reward gained by looking for an optimal policy rather than an equilibrium in Stackelberg games such as our security domain. Based on our experiments we present three sets of graphs to demonstrate (1) the runtime of ASAP compared to other common methods for finding a strategy, (2) the reward guaranteed by ASAP compared to other methods, and (3) the effect of varying the parameter k, the size of the multiset, on the performance of ASAP. In the first two sets of graphs, ASAP is run using a multiset of 80 elements; in the third set this number is varied. The first set of graphs, shown in Figure 1, gives the runtimes for the three-house (left column) and four-house (right column) domains. Each of the three rows of graphs corresponds to a different randomly-generated scenario. The x-axis shows the number of robber types the security agent faces and the y-axis of the graph shows the runtime in seconds. All experiments that were not concluded in 30 minutes (1800 seconds) were cut off. The runtime for the uniform policy is always negligible irrespective of the number of adversaries and hence is not shown.

Figure 1: Runtimes for various algorithms on problems of 3 and 4 houses.

The ASAP algorithm clearly outperforms the optimal, multiple-LP method as well as the MIP-Nash algorithm for finding the highest-reward Bayes-Nash equilibrium with respect to runtime. For a domain of three houses, the optimal method cannot reach a solution for more than seven robber types, and for four houses it cannot solve for more than six types within the cutoff time in any of the three scenarios. MIP-Nash solves for even fewer robber types within the cutoff time. On the other hand, ASAP runs much faster, and is able to solve for at least 20 adversaries for the three-house scenarios and for at least 12 adversaries in the four-house scenarios within the cutoff time. The runtime of ASAP does not increase strictly with the number of robber types for each scenario, but in general, the addition of more types increases the runtime required. The second set of graphs, Figure 2, shows the reward to the patrol agent given by each method for three scenarios in the three-house (left column) and four-house (right column) domains. This reward is the utility received by the security agent in the patrolling game, and not a percentage of the optimal reward, since it was not possible to obtain the optimal reward as the number of robber types increased.
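For reference, a sketch (our own helper names, not the paper's code) of how any fixed policy, such as the uniform-randomization baseline above, can be scored against a set of robber types after the per-type [0, 1] normalization described above. Each type best-responds to the policy independently, which is the same independence that lets MILP (8) avoid the Harsanyi blow-up.

```python
# Hypothetical helpers: per-type normalization and decomposed policy scoring.
import numpy as np

def normalize(A):
    """Rescale a payoff matrix to [0, 1] (assumes A is not constant)."""
    return (A - A.min()) / (A.max() - A.min())

def decomposed_value(x, Rs, Cs, priors):
    """Leader's expected reward for mixed strategy x against robber types l
    with priors priors[l]; each type best-responds on its own matrices."""
    total = 0.0
    for Rl, Cl, pl in zip(Rs, Cs, priors):
        vals = x @ Cl
        ties = np.flatnonzero(np.isclose(vals, vals.max()))
        j = ties[np.argmax((x @ Rl)[ties])]   # leader-favorable tie-break
        total += pl * (x @ Rl)[j]
    return total

# e.g., the uniform baseline over n routes: x = np.full(n, 1.0 / n)
```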
The uniform policy consistently provides the lowest reward in both domains, while the optimal method of course produces the optimal reward. The ASAP method remains consistently close to the optimal, even as the number of robber types increases. The highest-reward Bayes-Nash equilibria, provided by the MIP-Nash method, produced rewards higher than the uniform method, but lower than ASAP. This difference clearly illustrates the gains in the patrolling domain from committing to a strategy as the leader in a Stackelberg game, rather than playing a standard Bayes-Nash strategy.

Figure 2: Reward for various algorithms on problems of 3 and 4 houses.

The third set of graphs, shown in Figure 3, shows the effect of the multiset size on runtime in seconds (left column) and reward (right column), again expressed as the reward received by the security agent in the patrolling game, and not a percentage of the optimal reward. Results here are for the three-house domain. The trend is that as the multiset size is increased, the runtime and reward level both increase. Not surprisingly, the reward increases monotonically as the multiset size increases, but what is interesting is that there is relatively little benefit to using a large multiset in this domain. In all cases, the reward given by a multiset of 10 elements was within at least 96% of the reward given by an 80-element multiset. The runtime does not always increase strictly with the multiset size; indeed in one example (scenario 2 with 20 robber types), using a multiset of 10 elements took 1228 seconds, while using 80 elements only took 617 seconds. In general, runtime should increase since a larger multiset means a larger domain for the variables in the MILP, and thus a larger search space. However, an increase in the number of variables can sometimes allow for a policy to be constructed more quickly due to more flexibility in the problem.

Figure 3: Reward for ASAP using multisets of 10, 30, and 80 elements.

7. SUMMARY AND RELATED WORK This paper focuses on security for agents patrolling in hostile environments. In these environments, intentional threats are caused by adversaries about whom the security patrolling agents have incomplete information. Specifically, we deal with situations where the adversaries' actions and payoffs are known but the exact adversary type is unknown to the security agent. Agents acting in the real world quite frequently have such incomplete information about other agents. Bayesian games have been a popular choice to model such incomplete information games [3]. The Gala toolkit is one method for defining such games [9] without requiring the game to be represented in normal form via the Harsanyi transformation [8]; Gala's guarantees are focused on fully competitive games. Much work has been done on finding optimal Bayes-Nash equilibria for subclasses of Bayesian games, finding single Bayes-Nash equilibria for general Bayesian games [10], or approximate Bayes-Nash equilibria [18]. Less attention has been paid to finding the optimal strategy to commit to in a Bayesian game (the Stackelberg scenario [15]). However, the complexity of this problem was shown to be NP-hard in the general case [5], which also provides algorithms for this problem in the non-Bayesian case. Therefore, we present a heuristic called ASAP, with three key advantages towards addressing this problem.
First, ASAP searches for the highest reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches. Our k-uniform strategies are similar to the k-uniform strategies of [12]. While that work provides epsilon error-bounds based on the k-uniform strategies, their solution concept is still that of a Nash equilibrium, and they do not provide efficient algorithms for obtaining such k-uniform strategies. This contrasts with ASAP, where our emphasis is on a highly efficient heuristic approach that is not focused on equilibrium solutions. Finally, the patrolling problem which motivated our work has recently received growing attention from the multiagent community due to its wide range of applications [4, 13]. However, most of this work is focused on either limiting energy consumption involved in patrolling [7] or optimizing on criteria like the length of the path traveled [4, 13], without reasoning about any explicit model of an adversary [14]. Acknowledgments: This research is supported by the United States Department of Homeland Security through the Center for Risk and Economic Analysis of Terrorism Events (CREATE). It is also supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. Sarit Kraus is also affiliated with UMIACS. 8. REFERENCES [1] R. W. Beard and T. McLain. Multiple UAV cooperative search under collision avoidance and limited range communication constraints. In IEEE CDC, 2003. [2] D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997. [3] J. Brynielsson and S. Arnborg. Bayesian games for threat prediction and situation analysis. In FUSION, 2004. [4] Y. Chevaleyre. Theoretical analysis of multi-agent patrolling problem. In AAMAS, 2004. [5] V. Conitzer and T. Sandholm. Choosing the best strategy to commit to. In ACM Conference on Electronic Commerce, 2006. [6] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991. [7] C. Gui and P. Mohapatra. Virtual patrol: A new power conservation design for surveillance using sensor networks. In IPSN, 2005. [8] J. C. Harsanyi and R. Selten. A generalized Nash solution for two-person bargaining games with incomplete information. Management Science, 18(5):80-106, 1972. [9] D. Koller and A. Pfeffer. Generating and solving imperfect information games. In IJCAI, pages 1185-1193, 1995. [10] D. Koller and A. Pfeffer. Representations and solutions for game-theoretic problems. Artificial Intelligence, 94(1):167-215, 1997. [11] C. Lemke and J. Howson. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12:413-423, 1964. [12] R. J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In ACM Conference on Electronic Commerce, 2003. [13] A. Machado, G. Ramalho, J. D. Zucker, and A. Drougoul. Multi-agent patrolling: an empirical analysis on alternative architectures. In MABS, 2002. [14] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus.
Security in multiagent systems by policy randomization. In AAMAS, 2006. [15] T. Roughgarden. Stackelberg scheduling strategies. In ACM Symposium on TOC, 2001. [16] S. Ruan, C. Meirina, F. Yu, K. R. Pattipati, and R. L. Popp. Patrolling in a stochastic environment. In 10th Intl. Command and Control Research Symp., 2005. [17] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer programming methods for finding Nash equilibria. In AAAI, 2005. [18] S. Singh, V. Soni, and M. Wellman. Computing approximate Bayes-Nash equilibria with tree-games of incomplete information. In ACM Conference on Electronic Commerce, 2004.
An Efficient Heuristic Approach for Security Against Multiple Adversaries ABSTRACT In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue. This paper focuses on domains where these threats come from unknown adversaries. These domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games. However, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies. In this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium. Previous work has shown this problem of optimal strategy selection to be NP-hard. Therefore, we present a heuristic called ASAP, with three key advantages to address the problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches. 1. INTRODUCTION In many multiagent domains, agents must act in order to provide security against attacks by adversaries. A common issue that agents face in such security domains is uncertainty about the adversaries they may be facing. For example, a security robot may need to make a choice about which areas to patrol, and how often [16]. However, it will not know in advance exactly where a robber will choose to strike. A team of unmanned aerial vehicles (UAVs) [1] monitoring a region undergoing a humanitarian crisis may also need to choose a patrolling policy. They must make this decision without knowing in advance whether terrorists or other adversaries may be waiting to disrupt the mission at a given location. It may indeed be possible to model the motivations of types of adversaries the agent or agent team is likely to face in order to target these adversaries more closely. However, in both cases, the security robot or UAV team will not know exactly which kinds of adversaries may be active on any given day. A common approach for choosing a policy for agents in such scenarios is to model the scenarios as Bayesian games. A Bayesian game is a game in which agents may belong to one or more types; the type of an agent determines its possible actions and payoffs. The distribution of adversary types that an agent will face may be known or inferred from historical data. Usually, these games are analyzed according to the solution concept of a Bayes-Nash equilibrium, an extension of the Nash equilibrium for Bayesian games. However, in many settings, a Nash or Bayes-Nash equilibrium is not an appropriate solution concept, since it assumes that the agents' strategies are chosen simultaneously [5]. In some settings, one player can (or must) commit to a strategy before the other players choose their strategies. These scenarios are known as Stackelberg games [6]. In a Stackelberg game, a leader commits to a strategy first, and then a follower (or group of followers) selfishly optimize their own rewards, considering the action chosen by the leader.
For example, the security agent (leader) must first commit to a strategy for patrolling various areas. This strategy could be a mixed strategy in order to be unpredictable to the robbers (followers). The robbers, after observing the pattern of patrols over time, can then choose their strategy (which location to rob). Often, the leader in a Stackelberg game can attain a higher reward than if the strategies were chosen simultaneously. To see the advantage of being the leader in a Stackelberg game, consider a simple game with the payoff table as shown in Table 1. The leader is the row player and the follower is the column player. Here, the leader's payoff is listed first.

      1      2      3
1   5,5    0,0    3,10
2   0,0    2,2    5,0

Table 1: Payoff table for example normal form game.

The only Nash equilibrium for this game is when the leader plays 2 and the follower plays 2, which gives the leader a payoff of 2. However, if the leader commits to a uniform mixed strategy of playing 1 and 2 with equal (0.5) probability, the follower's best response is to play 3 to get an expected payoff of 5 (10 and 0 with equal probability). The leader's payoff would then be 4 (3 and 5 with equal probability). In this case, the leader now has an incentive to deviate and choose a pure strategy of 2 (to get a payoff of 5). However, this would cause the follower to deviate to strategy 2 as well, resulting in the Nash equilibrium. Thus, by committing to a strategy that is observed by the follower, and by avoiding the temptation to deviate, the leader manages to obtain a reward higher than that of the best Nash equilibrium. The problem of choosing an optimal strategy for the leader to commit to in a Stackelberg game is analyzed in [5] and found to be NP-hard in the case of a Bayesian game with multiple types of followers. Thus, efficient heuristic techniques for choosing high-reward strategies in these games is an important open issue. Methods for finding optimal leader strategies for non-Bayesian games [5] can be applied to this problem by converting the Bayesian game into a normal-form game by the Harsanyi transformation [8]. If, on the other hand, we wish to compute the highest-reward Nash equilibrium, new methods using mixed-integer linear programs (MILPs) [17] may be used, since the highest-reward Bayes-Nash equilibrium is equivalent to the corresponding Nash equilibrium in the transformed game. However, by transforming the game, the compact structure of the Bayesian game is lost. In addition, since the Nash equilibrium assumes a simultaneous choice of strategies, the advantages of being the leader are not considered. This paper introduces an efficient heuristic method for approximating the optimal leader strategy for security domains, known as ASAP (Agent Security via Approximate Policies). This method has three key advantages. First, it directly searches for an optimal strategy, rather than a Nash (or Bayes-Nash) equilibrium, thus allowing it to find high-reward non-equilibrium strategies like the one in the above example. Second, it generates policies with a support which can be expressed as a uniform distribution over a multiset of fixed size as proposed in [12]. This allows for policies that are simple to understand and represent [12], as well as a tunable parameter (the size of the multiset) that controls the simplicity of the policy. Third, the method allows for a Bayes-Nash game to be expressed compactly without conversion to a normal-form game, allowing for large speedups over existing Nash methods such as [17] and [11].
The rest of the paper is organized as follows. In Section 2 we fully describe the patrolling domain and its properties. Section 3 introduces the Bayesian game, the Harsanyi transformation, and existing methods for finding an optimal leader's strategy in a Stackelberg game. Then, in Section 4 the ASAP algorithm is presented for normal-form games, and in Section 5 we show how it can be adapted to the structure of Bayesian games with uncertain adversaries. Experimental results showing higher reward and faster policy computation over existing Nash methods are shown in Section 6, and we conclude with a discussion of related work in Section 7. 2. THE PATROLLING DOMAIN In most security patrolling domains, the security agents (like UAVs [1] or security robots [16]) cannot feasibly patrol all areas all the time. Instead, they must choose a policy by which they patrol various routes at different times, taking into account factors such as the likelihood of crime in different areas, possible targets for crime, and the security agents' own resources (number of security agents, amount of available time, fuel, etc.). It is usually beneficial for this policy to be nondeterministic so that robbers cannot safely rob certain locations, knowing that they will be safe from the security agents [14]. To demonstrate the utility of our algorithm, we use a simplified version of such a domain, expressed as a game. The most basic version of our game consists of two players: the security agent (the leader) and the robber (the follower) in a world consisting of m houses, 1...m. The security agent's set of pure strategies consists of possible routes of d houses to patrol (in an order). The security agent can choose a mixed strategy so that the robber will be unsure of exactly where the security agent may patrol, but the robber will know the mixed strategy the security agent has chosen. For example, the robber can observe over time how often the security agent patrols each area. With this knowledge, the robber must choose a single house to rob. We assume that the robber generally takes a long time to rob a house. If the house chosen by the robber is not on the security agent's route, then the robber successfully robs it. Otherwise, if it is on the security agent's route, then the earlier the house is on the route, the easier it is for the security agent to catch the robber before he finishes robbing it. We model the payoffs for this game with the following variables: • vl,x: value of the goods in house l to the security agent. • vl,q: value of the goods in house l to the robber. • cx: reward to the security agent of catching the robber. • cq: cost to the robber of getting caught. • pl: probability that the security agent can catch the robber at the lth house in the patrol (pl < pl′ ⇐⇒ l′ < l). The security agent's set of possible pure strategies (patrol routes) is denoted by X and includes all d-tuples i = ⟨w1, w2, ..., wd⟩ with w1, ..., wd ∈ {1, ..., m} where no two elements are equal (the agent is not allowed to return to the same house). The robber's set of possible pure strategies (houses to rob) is denoted by Q and includes all integers j = 1...m. The payoffs (security agent, robber) for pure strategies i, j are: • −vl,x, vl,q, for j = l ∉ i. • pl cx + (1 − pl)(−vl,x), −pl cq + (1 − pl)(vl,q), for j = l ∈ i.
With this structure it is possible to model many different types of robbers who have differing motivations; for example, one robber may have a lower cost of getting caught than another, or may value the goods in the various houses differently. If the distribution of different robber types is known or inferred from historical data, then the game can be modeled as a Bayesian game [6].

3. BAYESIAN GAMES

A Bayesian game contains a set of N agents, and each agent n must be one of a given set of types θn. For our patrolling domain, we have two agents, the security agent and the robber. θ1 is the set of security agent types and θ2 is the set of robber types. Since there is only one type of security agent, θ1 contains only one element. During the game, the robber knows its type but the security agent does not know the robber's type. For each agent (the security agent or the robber) n, there is a set of strategies σn and a utility function un: θ1 × θ2 × σ1 × σ2 → R. A Bayesian game can be transformed into a normal-form game using the Harsanyi transformation [8]. Once this is done, new linear-program (LP)-based methods for finding high-reward strategies for normal-form games [5] can be used to find a strategy in the transformed game; this strategy can then be used for the Bayesian game. While methods exist for finding Bayes-Nash equilibria directly, without the Harsanyi transformation [10], they find only a single equilibrium in the general case, which may not be of high reward. Recent work [17] has led to efficient mixed-integer linear program techniques to find the best Nash equilibrium for a given agent. However, these techniques do require a normal-form game, and so to compare the policies given by ASAP against the optimal policy, as well as against the highest-reward Nash equilibrium, we must apply these techniques to the Harsanyi-transformed matrix. The next two subsections elaborate on how this is done.

3.1 Harsanyi Transformation

The first step in solving Bayesian games is to apply the Harsanyi transformation [8], which converts the Bayesian game into a normal-form game. Given that the Harsanyi transformation is a standard concept in game theory, we explain it briefly through a simple example in our patrolling domain without introducing the mathematical formulations. Let us assume there are two robber types a and b in the Bayesian game. Robber a will be active with probability α, and robber b will be active with probability 1 − α. The rules described in Section 2 allow us to construct simple payoff tables. Assume that there are two houses in the world (1 and 2) and hence there are two patrol routes (pure strategies) for the agent: <1,2> and <2,1>. The robber can rob either house 1 or house 2 and hence he has two strategies (denoted as 1l, 2l for robber type l). Since there are two types assumed (denoted as a and b), we construct two payoff tables (shown in Table 2) corresponding to the security agent playing a separate game with each of the two robber types with probabilities α and 1 − α. First, consider robber type a. Borrowing the notation from the domain section, we assign the following values to the variables: v1,x = v1,q = 3/4, v2,x = v2,q = 1/4, cx = 1/2, cq = 1, p1 = 1, p2 = 1/2. Using these values we construct a base payoff table as the payoff for the game against robber type a.
For example, if the security agent chooses route <1,2> when robber a is active, and robber a chooses house 1, the robber receives a reward of −1 (for being caught) and the agent receives a reward of 0.5 for catching the robber. The payoffs for the game against robber type b are constructed using different values.

Table 2: Payoff tables: Security Agent vs Robbers a and b. (Entries are (security agent, robber); the table against robber a, computed from the values above, is shown here, and the table against robber b is built the same way from b's values.)
                 1a               2a
 <1,2> |  0.5, −1        |  0.125, −0.375
 <2,1> | −0.125, −0.125  |  0.5, −1

Using the Harsanyi technique involves introducing a chance node that determines the robber's type, thus transforming the security agent's incomplete information regarding the robber into imperfect information [3]. The Bayesian equilibrium of the game is then precisely the Nash equilibrium of the imperfect information game. The transformed, normal-form game is shown in Table 3.

Table 3: Harsanyi Transformed Payoff Table.

In the transformed game, the security agent is the column player, and the set of all robber types together is the row player. Suppose that robber type a robs house 1 and robber type b robs house 2, while the security agent chooses patrol <1,2>. Then, the security agent and the robber receive an expected payoff corresponding to their payoffs from the agent encountering robber a at house 1 with probability α and robber b at house 2 with probability 1 − α.

3.2 Finding an Optimal Strategy

Although a Nash equilibrium is the standard solution concept for games in which agents choose strategies simultaneously, in our security domain, the security agent (the leader) can gain an advantage by committing to a mixed strategy in advance. Since the followers (the robbers) will know the leader's strategy, the optimal response for the followers will be a pure strategy. Given the common assumption, taken in [5], that in the case where followers are indifferent, they will choose the strategy that benefits the leader, there must exist a guaranteed optimal strategy for the leader [5]. From the Bayesian game in Table 2, we constructed the Harsanyi-transformed bimatrix in Table 3. The strategies for each player (security agent or robber) in the transformed game correspond to all combinations of possible strategies taken by each of that player's types. Therefore, we denote X = σ1^θ1 = σ1 and Q = σ2^θ2 as the index sets of the security agent's and robbers' pure strategies respectively, with R and C as the corresponding payoff matrices: Rij is the reward of the security agent and Cij is the reward of the robbers when the security agent takes pure strategy i and the robbers take pure strategy j. A mixed strategy for the security agent is a probability distribution over its set of pure strategies and will be represented by a vector x = (px1, px2, ..., px|X|), where pxi ≥ 0 and Σ_{i} pxi = 1. Here, pxi is the probability that the security agent will choose its ith pure strategy. The optimal mixed strategy for the security agent can be found in time polynomial in the number of rows in the normal-form game using the following linear program formulation from [5]. For every possible pure strategy j by the follower (the set of all robber types), solve:

    max Σ_{i∈X} pxi Rij
    s.t. Σ_{i∈X} pxi Cij ≥ Σ_{i∈X} pxi Cij'  for all j' ∈ Q
         Σ_{i∈X} pxi = 1,  pxi ≥ 0.

Then, for all feasible follower strategies j, choose the one that maximizes Σ_{i∈X} pxi Rij, the reward for the security agent (leader). The pxi variables give the optimal strategy for the security agent. Note that while this method is polynomial in the number of rows in the transformed, normal-form game, the number of rows increases exponentially with the number of robber types. Using this method for a Bayesian game thus requires running |σ2|^|θ2| separate linear programs.
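As a concrete sketch of the two steps just described, the code below builds the Harsanyi-transformed matrices from per-type payoff tables (leader as the row player, for consistency with the code) and then runs the multiple-LPs method of [5] on the result. It assumes NumPy and SciPy; the helper names (harsanyi, multiple_lps) are ours, and this is an illustration rather than the authors' implementation.

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

def harsanyi(Rs, Cs, probs):
    """Combine per-type payoff matrices Rs[l], Cs[l] (NumPy arrays, leader rows)
    into the transformed game: follower strategies are tuples, one per type."""
    nX = Rs[0].shape[0]
    combos = list(product(*[range(C.shape[1]) for C in Cs]))
    R = np.array([[sum(p * Rl[i, jl] for p, Rl, jl in zip(probs, Rs, js))
                   for js in combos] for i in range(nX)])
    C = np.array([[sum(p * Cl[i, jl] for p, Cl, jl in zip(probs, Cs, js))
                   for js in combos] for i in range(nX)])
    return R, C

def multiple_lps(R, C):
    """Multiple-LPs method of [5]: one LP per follower pure strategy j."""
    nX, nQ = R.shape
    best_val, best_x = -np.inf, None
    for j in range(nQ):
        # follower must weakly prefer j: x @ (C[:, jp] - C[:, j]) <= 0 for all jp
        A_ub = np.array([C[:, jp] - C[:, j] for jp in range(nQ) if jp != j])
        res = linprog(-R[:, j], A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
                      A_eq=np.ones((1, nX)), b_eq=[1.0], bounds=[(0, 1)] * nX)
        if res.success and -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_x, best_val
```

The exponential growth noted above shows up here directly: combos ranges over |σ2|^|θ2| follower strategy tuples, and multiple_lps solves one LP per tuple.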
This is no surprise, since finding the leader's optimal strategy in a Bayesian Stackelberg game is NP-hard [5].

4. HEURISTIC APPROACHES

Given that finding the optimal strategy for the leader is NP-hard, we provide a heuristic approach. In this heuristic we limit the possible mixed strategies of the leader to select actions with probabilities that are integer multiples of 1/k for a predetermined integer k. Previous work [14] has shown that strategies with high entropy are beneficial for security applications when opponents' utilities are completely unknown. In our domain, if utilities are not considered, this method will result in uniform-distribution strategies. One advantage of such strategies is that they are compact to represent (as fractions) and simple to understand; therefore they can be efficiently implemented by real organizations. We aim to maintain the advantage provided by simple strategies for our security application problem, while incorporating the effect of the robbers' rewards on the security agent's rewards. Thus, the ASAP heuristic will produce strategies which are k-uniform. A mixed strategy is denoted k-uniform if it is a uniform distribution on a multiset S of pure strategies with |S| = k. A multiset is a set whose elements may be repeated multiple times; thus, for example, the mixed strategy corresponding to the multiset {1, 1, 2} would take strategy 1 with probability 2/3 and strategy 2 with probability 1/3. ASAP allows the size of the multiset to be chosen in order to balance the complexity of the strategy reached with the goal that the identified strategy will yield a high reward.

Another advantage of the ASAP heuristic is that it operates directly on the compact Bayesian representation, without requiring the Harsanyi transformation. This is because the different follower (robber) types are independent of each other. Hence, evaluating the leader strategy against a Harsanyi-transformed game matrix is equivalent to evaluating against each of the game matrices for the individual follower types. This independence property is exploited in ASAP to yield a decomposition scheme. Note that the LP method introduced by [5] to compute optimal Stackelberg policies is unlikely to be decomposable into a small number of games, as it was shown to be NP-hard for Bayes-Nash problems. Finally, note that ASAP requires the solution of only one optimization problem, rather than solving a series of problems as in the LP method of [5].

For a single follower type, the algorithm works the following way. Given a particular k, for each possible mixed strategy x for the leader that corresponds to a multiset of size k, evaluate the leader's payoff from x when the follower plays a reward-maximizing pure strategy. We then take the mixed strategy with the highest payoff. We need only consider the reward-maximizing pure strategies of the followers (robbers), since for a given fixed strategy x of the security agent, each robber type faces a problem with fixed linear rewards; if a mixed strategy is optimal for the robber, then so are all the pure strategies in the support of that mixed strategy. Note also that because we limit the leader's strategies to take on discrete values, the assumption from Section 3.2 that the followers will break ties in the leader's favor is not significant, since ties will be unlikely to arise. This is because, in domains where rewards are drawn from a continuous random distribution, the probability of a follower having more than one pure optimal response to a given leader strategy approaches zero, and the leader has only a finite number of possible mixed strategies.
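The single-follower procedure just described can be written as a direct enumeration. The sketch below (Python; exhaustive and exponential in k, so purely illustrative) tries every k-element multiset of leader pure strategies, lets the follower best-respond, and keeps the best:

```python
from itertools import combinations_with_replacement

def asap_brute_force(R, C, k):
    """Exhaustive version of the single-follower heuristic: try every
    k-element multiset of leader pure strategies and keep the best one."""
    nX, nQ = len(R), len(R[0])
    best_val, best_x = float("-inf"), None
    for multiset in combinations_with_replacement(range(nX), k):
        x = [multiset.count(i) / k for i in range(nX)]          # k-uniform mix
        j = max(range(nQ),                                      # follower best response
                key=lambda j: sum(x[i] * C[i][j] for i in range(nX)))
        val = sum(x[i] * R[i][j] for i in range(nX))            # leader's reward
        if val > best_val:
            best_val, best_x = val, x
    return best_x, best_val
```

The number of multisets grows as C(|X| + k − 1, k), which is exactly why the MILP formulation developed next is needed for realistic problem sizes.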
Our approach to characterizing the optimal strategy for the security agent makes use of properties of linear programming. We briefly outline these results here for completeness; for detailed discussion and proofs see one of many references on the topic, such as [2]. Every linear programming problem, such as:

    max c^T x  subject to  Ax = b, x ≥ 0,

has an associated dual linear program, in this case:

    min b^T y  subject to  A^T y ≥ c.

These primal/dual pairs of problems satisfy weak duality: for any x and y that are primal and dual feasible solutions respectively, c^T x ≤ b^T y. Thus a pair of feasible solutions is optimal if c^T x = b^T y, and the problems are then said to satisfy strong duality. In fact, if a linear program is feasible and has a bounded optimal solution, then the dual is also feasible and there is a pair x*, y* that satisfies c^T x* = b^T y*. These optimal solutions are characterized by the following optimality conditions (as defined in [2]):
• primal feasibility: Ax = b, x ≥ 0
• dual feasibility: A^T y ≥ c
• complementary slackness: xi (A^T y − c)i = 0 for all i.

Note that this last condition implies c^T x = y^T Ax = b^T y, which proves optimality for primal and dual feasible solutions x and y. In the following subsections, we first define the problem in its most intuitive form as a mixed-integer quadratic program (MIQP), and then show how this problem can be converted into a mixed-integer linear program (MILP).

4.1 Mixed-Integer Quadratic Program

We begin with the case of a single type of follower. Let the leader be the row player and the follower the column player. We denote by x the vector of strategies of the leader and by q the vector of strategies of the follower. We also denote by X and Q the index sets of the leader's and follower's pure strategies, respectively. The payoff matrices R and C are defined so that Rij is the reward of the leader and Cij is the reward of the follower when the leader takes pure strategy i and the follower takes pure strategy j. Let k be the size of the multiset. We first fix the policy of the leader to some k-uniform policy x. The value xi is the number of times pure strategy i is used in the k-uniform policy, so pure strategy i is selected with probability xi/k. We formulate the optimization problem the follower solves to find its optimal response to x as the following linear program:

    max Σ_{j∈Q} Σ_{i∈X} (Cij xi / k) qj
    s.t. Σ_{j∈Q} qj = 1,  qj ≥ 0.                                  (2)

The objective function maximizes the follower's expected reward given x, while the constraints make feasible any mixed strategy q for the follower. The dual to this linear programming problem is the following:

    min a
    s.t. a ≥ Σ_{i∈X} (Cij xi) / k  for all j ∈ Q.                  (3)

From strong duality and complementary slackness we obtain that the follower's maximum reward value, a, is the value of every pure strategy with qj > 0, that is, of every pure strategy in the support of the optimal mixed strategy. Therefore each of these pure strategies is optimal. Optimal solutions to the follower's problem are characterized by the linear programming optimality conditions: primal feasibility constraints in (2), dual feasibility constraints in (3), and complementary slackness

    qj (a − Σ_{i∈X} (Cij xi) / k) = 0  for all j ∈ Q.

These conditions must be included in the problem solved by the leader in order to consider only best responses by the follower to the k-uniform policy x.
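To make the follower's primal/dual pair concrete, the sketch below (Python with NumPy/SciPy; the function name is ours) solves LP (2) for a fixed k-uniform policy, uses the closed form of dual (3) (a is simply the best column value), and asserts the optimality conditions quoted above:

```python
import numpy as np
from scipy.optimize import linprog

def follower_lp_check(C, x, k, tol=1e-8):
    """Solve the follower's LP (2) for fixed integer counts x (summing to k),
    then check strong duality and complementary slackness against dual (3)."""
    vals = np.array(x) @ np.array(C) / k            # expected value of each column j
    primal = linprog(-vals, A_eq=np.ones((1, len(vals))), b_eq=[1.0],
                     bounds=[(0, None)] * len(vals))
    q = primal.x
    a = vals.max()                                  # dual (3) in closed form
    # strong duality: the follower's optimal reward equals the dual bound
    assert abs(-primal.fun - a) < tol
    # complementary slackness: q_j > 0 only on columns attaining the maximum
    assert all(q[j] < tol or abs(a - vals[j]) < tol for j in range(len(vals)))
    return q, a
```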
The leader seeks the k-uniform solution x that maximizes its own payoff, given that the follower uses an optimal response q(x). Therefore the leader solves the following integer problem:

    max_x Σ_{i∈X} Σ_{j∈Q} (Rij q(x)j / k) xi
    s.t. Σ_{i∈X} xi = k
         xi ∈ {0, 1, ..., k}  for all i ∈ X.                        (4)

Problem (4) maximizes the leader's reward with the follower's best response (qj for a fixed leader policy x, hence denoted q(x)j) by selecting a uniform policy from a multiset of constant size k. We complete this problem by including the characterization of q(x) through the linear programming optimality conditions. To simplify writing the complementary slackness conditions, we constrain q(x) to be only optimal pure strategies by considering only integer solutions of q(x). The leader's problem becomes:

    max_{x,q,a} Σ_{i∈X} Σ_{j∈Q} (Rij / k) xi qj
    s.t. Σ_{i∈X} xi = k
         Σ_{j∈Q} qj = 1
         0 ≤ a − Σ_{i∈X} (Cij xi) / k ≤ (1 − qj) M  for all j ∈ Q
         xi ∈ {0, 1, ..., k}  for all i ∈ X
         qj ∈ {0, 1}  for all j ∈ Q.                                (5)

Here, the constant M is some large number. The first and fourth constraints enforce a k-uniform policy for the leader, and the second and fifth constraints enforce a feasible pure strategy for the follower. The third constraint enforces dual feasibility of the follower's problem (leftmost inequality) and the complementary slackness constraint for an optimal pure strategy q for the follower (rightmost inequality). In fact, since only one pure strategy can be selected by the follower, say qh = 1, this last constraint enforces that a = Σ_{i∈X} (Cih xi) / k, while imposing no additional constraint on pure strategies that have qj = 0. We conclude this subsection by noting that Problem (5) is an integer program with a non-convex quadratic objective in general, as the matrix R need not be positive semi-definite. Efficient solution of non-linear, non-convex integer problems remains a challenging research question. In the next section we show a reformulation of this problem as a linear integer program, for which a number of efficient commercial solvers exist.

4.2 Mixed-Integer Linear Program

We can linearize the quadratic program of Problem (5) through the change of variables zij = xi qj, obtaining the following problem:

    max_{q,z,a} Σ_{i∈X} Σ_{j∈Q} (Rij / k) zij
    s.t. Σ_{i∈X} Σ_{j∈Q} zij = k
         Σ_{j∈Q} zij ≤ k  for all i ∈ X
         k qj ≤ Σ_{i∈X} zij ≤ k  for all j ∈ Q
         Σ_{j∈Q} qj = 1
         0 ≤ a − Σ_{i∈X} (Cij / k)(Σ_{h∈Q} zih) ≤ (1 − qj) M  for all j ∈ Q
         zij ∈ {0, 1, ..., k}
         qj ∈ {0, 1}.                                               (6)

Proposition 1: Problems (5) and (6) are equivalent.

Proof: Consider x, q a feasible solution of (5). We will show that q, zij = xi qj is a feasible solution of (6) of the same objective function value. The equivalence of the objective functions, and constraints 4, 6 and 7 of (6), are satisfied by construction. The fact that Σ_{j∈Q} zij = xi, since Σ_{j∈Q} qj = 1, explains constraints 1, 2, and 5 of (6). Constraint 3 of (6) is satisfied because Σ_{i∈X} zij = k qj. Let us now consider q, z feasible for (6). We will show that q and xi = Σ_{j∈Q} zij are feasible for (5) with the same objective value. In fact, all constraints of (5) are readily satisfied by construction. To see that the objectives match, notice that if qh = 1 then the third constraint in (6) implies that Σ_{i∈X} zih = k, which means that zij = 0 for all i ∈ X and all j ≠ h. Therefore xi qj = zij: this last equality holds because both are 0 when j ≠ h. This shows that the transformation preserves the objective function value, completing the proof.

Given this transformation to a mixed-integer linear program (MILP), we now show how we can apply our decomposition technique to the MILP to obtain significant speedups for Bayesian games with multiple follower types.

5. DECOMPOSITION FOR MULTIPLE ADVERSARIES

The MILP developed in the previous section handles only one follower. Since our security scenario contains multiple follower (robber) types, we change the response function for the follower from a pure strategy into a weighted combination over pure follower strategies, where the weights are the probabilities of occurrence of each of the follower types.
5.1 Decomposed MIQP

To admit multiple adversaries in our framework, we modify the notation defined in the previous section to reason about multiple follower types. We denote by x the vector of strategies of the leader and by q^l the vector of strategies of follower l, with L denoting the index set of follower types. We also denote by X and Q the index sets of the leader's and follower l's pure strategies, respectively. We also index the payoff matrices on each follower l, considering the matrices R^l and C^l. Using this modified notation, we characterize the optimal solution of follower l's problem, given the leader's k-uniform policy x, with the following optimality conditions:

    Σ_{j∈Q} q^l_j = 1,  q^l_j ≥ 0
    a^l ≥ Σ_{i∈X} (C^l_ij xi) / k  for all j ∈ Q
    q^l_j (a^l − Σ_{i∈X} (C^l_ij xi) / k) = 0  for all j ∈ Q.

Again, considering only optimal pure strategies for follower l's problem, we can linearize the complementarity constraint above. We incorporate these constraints into the leader's problem, which selects the optimal k-uniform policy. Therefore, given a priori probabilities p^l, with l ∈ L, of facing each follower, the leader solves the following problem:

    max_{x,q,a} Σ_{i∈X} Σ_{l∈L} Σ_{j∈Q} (p^l R^l_ij / k) xi q^l_j
    s.t. Σ_{i∈X} xi = k
         Σ_{j∈Q} q^l_j = 1  for all l ∈ L
         0 ≤ a^l − Σ_{i∈X} (C^l_ij xi) / k ≤ (1 − q^l_j) M  for all l ∈ L, j ∈ Q
         xi ∈ {0, 1, ..., k}
         q^l_j ∈ {0, 1}.                                            (7)

Problem (7) for a Bayesian game with multiple follower types is indeed equivalent to Problem (5) on the payoff matrices obtained from the Harsanyi transformation of the game. In fact, every pure strategy j in Problem (5) corresponds to a sequence of pure strategies j_l, one for each follower l ∈ L. This means that qj = 1 if and only if q^l_{j_l} = 1 for all l ∈ L. In addition, given the a priori probabilities p^l of facing player l, the reward in the Harsanyi transformation payoff table is Rij = Σ_{l∈L} p^l R^l_{i j_l}. The same relation holds between C and C^l. These relations between a pure strategy in the equivalent normal-form game and pure strategies in the individual games with each follower are key in showing these problems are equivalent.

5.2 Decomposed MILP

We can linearize the quadratic programming Problem (7) through the change of variables z^l_ij = xi q^l_j, obtaining the following problem:

    max_{q,z,a} Σ_{l∈L} Σ_{i∈X} Σ_{j∈Q} (p^l R^l_ij / k) z^l_ij
    s.t. Σ_{i∈X} Σ_{j∈Q} z^l_ij = k  for all l ∈ L
         Σ_{j∈Q} z^l_ij ≤ k  for all l ∈ L, i ∈ X
         k q^l_j ≤ Σ_{i∈X} z^l_ij ≤ k  for all l ∈ L, j ∈ Q
         Σ_{j∈Q} q^l_j = 1  for all l ∈ L
         0 ≤ a^l − Σ_{i∈X} (C^l_ij / k)(Σ_{h∈Q} z^l_ih) ≤ (1 − q^l_j) M  for all l ∈ L, j ∈ Q
         Σ_{j∈Q} z^l_ij = Σ_{j∈Q} z^1_ij  for all l ∈ L, i ∈ X
         z^l_ij ∈ {0, 1, ..., k}
         q^l_j ∈ {0, 1}.                                            (8)

Proposition 2: Problems (7) and (8) are equivalent.

Proof: Consider x, q^l, a^l with l ∈ L a feasible solution of (7). We will show that q^l, a^l, z^l_ij = xi q^l_j is a feasible solution of (8) of the same objective function value. The equivalence of the objective functions, and constraints 4, 7 and 8 of (8), are satisfied by construction. The fact that Σ_{j∈Q} z^l_ij = xi, since Σ_{j∈Q} q^l_j = 1, explains constraints 1, 2, 5 and 6 of (8). Constraint 3 of (8) is satisfied because Σ_{i∈X} z^l_ij = k q^l_j. Let us now consider q^l, z^l, a^l feasible for (8). We will show that q^l, a^l and xi = Σ_{j∈Q} z^1_ij are feasible for (7) with the same objective value. In fact, all constraints of (7) are readily satisfied by construction. To see that the objectives match, notice that for each l exactly one q^l_j must equal 1 and the rest equal 0. Let us say that q^l_{j_l} = 1; then the third constraint in (8) implies that Σ_{i∈X} z^l_{i j_l} = k, which means that z^l_ij = 0 for all i ∈ X and all j ≠ j_l. In particular this implies that xi = Σ_{j∈Q} z^1_ij = Σ_{j∈Q} z^l_ij = z^l_{i j_l}, where the second equality follows from constraint 6 of (8) and the third from z^l_ij = 0 for j ≠ j_l. Therefore xi q^l_j = z^l_ij: this last equality holds because both are 0 when j ≠ j_l. Effectively, constraint 6 ensures that all the adversaries are calculating their best responses against a particular fixed policy of the agent. This shows that the transformation preserves the objective function value, completing the proof.

We can therefore solve this equivalent linear integer program with efficient integer programming packages, which can handle problems with thousands of integer variables. We implemented the decomposed MILP, and the results are shown in the following section.
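The following sketch encodes Problem (8) using the open-source PuLP modeling package (an assumption on our part; the paper's experiments used CPLEX 8.1). It is a minimal illustration of the decomposition, and with a single follower type it reduces to Problem (6):

```python
import pulp

def asap_decomposed_milp(Rs, Cs, probs, k, M=1e6):
    """Solve a sketch of the decomposed ASAP MILP (8). Rs[l][i][j], Cs[l][i][j]
    are leader/follower payoffs against follower type l; probs[l] is its prior."""
    L, X, Q = range(len(Rs)), range(len(Rs[0])), range(len(Rs[0][0]))
    prob = pulp.LpProblem("ASAP_decomposed", pulp.LpMaximize)
    z = [[[pulp.LpVariable(f"z_{l}_{i}_{j}", 0, k, cat="Integer")
           for j in Q] for i in X] for l in L]
    q = [[pulp.LpVariable(f"q_{l}_{j}", cat="Binary") for j in Q] for l in L]
    a = [pulp.LpVariable(f"a_{l}") for l in L]
    # objective: leader's expected reward over follower types
    prob += pulp.lpSum((probs[l] * Rs[l][i][j] / k) * z[l][i][j]
                       for l in L for i in X for j in Q)
    for l in L:
        prob += pulp.lpSum(z[l][i][j] for i in X for j in Q) == k       # (1)
        prob += pulp.lpSum(q[l][j] for j in Q) == 1                     # (4)
        for i in X:
            prob += pulp.lpSum(z[l][i][j] for j in Q) <= k              # (2)
            prob += (pulp.lpSum(z[l][i][j] for j in Q) ==
                     pulp.lpSum(z[0][i][j] for j in Q))                 # (6): shared leader policy
        for j in Q:
            prob += k * q[l][j] <= pulp.lpSum(z[l][i][j] for i in X)    # (3)
            prob += pulp.lpSum(z[l][i][j] for i in X) <= k
            val = pulp.lpSum((Cs[l][i][j] / k) * z[l][i][h]
                             for i in X for h in Q)
            prob += a[l] - val >= 0                                     # (5): dual feasibility
            prob += a[l] - val <= (1 - q[l][j]) * M                     # (5): compl. slackness
    prob.solve()
    # recover the leader's k-uniform mixed strategy from the z variables
    return [sum(z[0][i][j].value() for j in Q) / k for i in X]
```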
6. EXPERIMENTAL RESULTS

The patrolling domain and the payoffs for the associated game are detailed in Sections 2 and 3. We performed experiments for this game in worlds of three and four houses, with patrols consisting of two houses. The description given in Section 2 is used to generate a base case for both the security agent and robber payoff functions. The payoff tables for additional robber types are constructed and added to the game by adding a random distribution of varying size to the payoffs in the base case. All games are normalized so that, for each robber type, the minimum and maximum payoffs to the security agent and robber are 0 and 1, respectively. Using the data generated, we performed the experiments using four methods for generating the security agent's strategy:
• uniform randomization
• ASAP
• the multiple linear programs method from [5] (to find the true optimal strategy)
• the highest-reward Bayes-Nash equilibrium, found using the MIP-Nash algorithm [17]

The last three methods were applied using CPLEX 8.1. Because the last two methods are designed for normal-form games rather than Bayesian games, the games were first converted using the Harsanyi transformation [8]. The uniform randomization method simply chooses a uniform random policy over all possible patrol routes. We use this method as a simple baseline to measure the performance of our heuristics. We anticipated that the uniform policy would perform reasonably well, since maximum-entropy policies have been shown to be effective in multiagent security domains [14]. The highest-reward Bayes-Nash equilibria were used in order to demonstrate the higher reward gained by looking for an optimal policy rather than an equilibrium in Stackelberg games such as our security domain.

Based on our experiments we present three sets of graphs to demonstrate (1) the runtime of ASAP compared to other common methods for finding a strategy, (2) the reward guaranteed by ASAP compared to other methods, and (3) the effect of varying the parameter k, the size of the multiset, on the performance of ASAP. In the first two sets of graphs, ASAP is run using a multiset of 80 elements; in the third set this number is varied.

Figure 1: Runtimes for various algorithms on problems of 3 and 4 houses.

The first set of graphs, shown in Figure 1, gives the runtimes for the three-house (left column) and four-house (right column) domains. Each of the three rows of graphs corresponds to a different randomly generated scenario. The x-axis shows the number of robber types the security agent faces and the y-axis shows the runtime in seconds. All experiments that were not concluded in 30 minutes (1800 seconds) were cut off. The runtime for the uniform policy is always negligible irrespective of the number of adversaries and hence is not shown. The ASAP algorithm clearly outperforms the optimal multiple-LPs method as well as the MIP-Nash algorithm for finding the highest-reward Bayes-Nash equilibrium with respect to runtime. For a domain of three houses, the optimal method cannot reach a solution for more than seven robber types, and for four houses it cannot solve for more than six types within the cutoff time in any of the three scenarios. MIP-Nash solves for even fewer robber types within the cutoff time.
On the other hand, ASAP runs much faster, and is able to solve for at least 20 adversaries in the three-house scenarios and for at least 12 adversaries in the four-house scenarios within the cutoff time. The runtime of ASAP does not increase strictly with the number of robber types for each scenario, but in general, the addition of more types increases the runtime required.

Figure 2: Reward for various algorithms on problems of 3 and 4 houses.

The second set of graphs, Figure 2, shows the reward to the patrol agent given by each method for three scenarios in the three-house (left column) and four-house (right column) domains. This reward is the utility received by the security agent in the patrolling game, and not a percentage of the optimal reward, since it was not possible to obtain the optimal reward as the number of robber types increased. The uniform policy consistently provides the lowest reward in both domains, while the optimal method of course produces the optimal reward. The ASAP method remains consistently close to the optimal, even as the number of robber types increases. The highest-reward Bayes-Nash equilibria, provided by the MIP-Nash method, produced rewards higher than the uniform method, but lower than ASAP. This difference clearly illustrates the gains in the patrolling domain from committing to a strategy as the leader in a Stackelberg game, rather than playing a standard Bayes-Nash strategy.

The third set of graphs, shown in Figure 3, shows the effect of the multiset size on runtime in seconds (left column) and reward (right column), again expressed as the reward received by the security agent in the patrolling game, and not as a percentage of the optimal reward. Results here are for the three-house domain. The trend is that as the multiset size is increased, the runtime and reward level both increase. Not surprisingly, the reward increases monotonically as the multiset size increases, but what is interesting is that there is relatively little benefit to using a large multiset in this domain. In all cases, the reward given by a multiset of 10 elements was at least 96% of the reward given by an 80-element multiset. The runtime does not always increase strictly with the multiset size; indeed, in one example (scenario 2 with 20 robber types), using a multiset of 10 elements took 1228 seconds, while using 80 elements took only 617 seconds. In general, runtime should increase, since a larger multiset means a larger domain for the variables in the MILP, and thus a larger search space. However, an increase in the number of variables can sometimes allow a policy to be constructed more quickly due to more flexibility in the problem.
I-64
Organizational Self-Design in Semi-dynamic Environments
Organizations are an important basis for coordination in multiagent systems. However, there is no best way to organize and all ways of organizing are not equally effective. Attempting to optimize an organizational structure depends strongly on environmental features including problem characteristics, available resources, and agent capabilities. If the environment is dynamic, the environmental conditions or the problem task structure may change over time. This precludes the use of static, design-time generated, organizational structures in such systems. On the other hand, for many real environments, the problems are not totally unique either: certain characteristics and conditions change slowly, if at all, and these can have an important effect in creating stable organizational structures. Organizational Self-Design (OSD) has been proposed as an approach for constructing suitable organizational structures at runtime. We extend the existing OSD approach to include worth-oriented domains, model other resources in addition to only processor resources and build robustness into the organization. We then evaluate our approach against the contract-net approach and show that our OSD agents perform better, are more efficient, and more flexible to changes in the environment.
[ "organiz self-design", "organ", "coordin", "multiag system", "organiz structur", "robust", "agent spawn", "composit", "task analysi", "environ model", "simul", "extend hierarch task structur", "organiz-self design", "task and resourc alloc" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "M", "R", "U", "M", "M", "M" ]
Organizational Self-Design in Semi-dynamic Environments
Sachin Kamboj∗ and Keith S. Decker
Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716
{kamboj, decker}@cis.udel.edu
(∗ Primary author is a student.)

ABSTRACT

Organizations are an important basis for coordination in multiagent systems. However, there is no best way to organize and all ways of organizing are not equally effective. Attempting to optimize an organizational structure depends strongly on environmental features including problem characteristics, available resources, and agent capabilities. If the environment is dynamic, the environmental conditions or the problem task structure may change over time. This precludes the use of static, design-time generated, organizational structures in such systems. On the other hand, for many real environments, the problems are not totally unique either: certain characteristics and conditions change slowly, if at all, and these can have an important effect in creating stable organizational structures. Organizational Self-Design (OSD) has been proposed as an approach for constructing suitable organizational structures at runtime. We extend the existing OSD approach to include worth-oriented domains, model other resources in addition to only processor resources, and build robustness into the organization. We then evaluate our approach against the contract-net approach and show that our OSD agents perform better, are more efficient, and are more flexible to changes in the environment.

Categories and Subject Descriptors: I.2.11 [Distributed Artificial Intelligence]: Multiagent systems
General Terms: Algorithms, Design, Performance, Experimentation

1. INTRODUCTION

In this paper, we are primarily interested in the organizational design of a multiagent system: the roles enacted by the agents, the coordination between the roles, and the number and assignment of roles and resources to the individual agents. The organizational design is complicated by the fact that there is no best way to organize and all ways of organizing are not equally effective [2]. Instead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved. The environmental conditions may not be known a priori, or may change over time, which would preclude the use of a static organizational structure. On the other hand, all problem instances and environmental conditions are not always unique, which would render inefficient the use of a new, bespoke organizational structure for every problem instance. Organizational Self-Design (OSD) [4, 10] has been proposed as an approach to designing organizations at run-time in which the agents are responsible for generating their own organizational structures. We believe that OSD is especially suited to the above scenario, in which the environment is semi-dynamic: the agents can adapt to changes in the task structures and environmental conditions, while still being able to generate relatively stable organizational structures that exploit the common characteristics across problem instances. In our approach (as in [10]), we define two operators for OSD, agent spawning and composition: when an agent becomes overloaded, it spawns off a new agent to handle part of its task load/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another agent. We use TÆMS as the underlying representation for our problem solving requests.
TÆMS [11] (Task Analysis, Environment Modeling and Simulation) is a computational framework for representing and reasoning about complex task environments in which tasks (problems) are represented using extended hierarchical task structures [3]. The root node of the task structure represents the high-level goal that the agent is trying to achieve. The sub-nodes of a node represent the subtasks and methods that make up the high-level task. The leaf nodes are at the lowest level of abstraction and represent executable methods, the primitive actions that the agents can perform. The executable methods, themselves, may have multiple outcomes, with different probabilities and different characteristics such as quality, cost and duration. TÆMS also allows various mechanisms for specifying subtask variations and alternatives; i.e., each node in TÆMS is labeled with a characteristic accumulation function that describes how many or which subgoals or sets of subgoals need to be achieved in order to achieve a particular higher-level goal. TÆMS has been used to model many different problem-solving environments including distributed sensor networks, information gathering, hospital scheduling, EMS, and military planning [5, 6, 3, 15].

The main contributions of this paper are as follows:
1. We extend existing OSD approaches to use TÆMS as the underlying problem representation, which allows us to model and use OSD for worth-oriented domains. This in turn allows us to reason about (1) alternative task and role assignments that make different quality/cost tradeoffs and generate different organizational structures and (2) uncertainties in the execution of tasks.
2. We model the use of resources other than only processor resources.
3. We incorporate robustness into the organizational structures.

2. RELATED WORK

The concept of OSD is not new and has been around since the work of Corkill and Lesser on the DVMT system [4], even though the concept was not fully developed by them. More recently, Dignum et al. [8] have described OSD in the context of the reorganization of agent societies and attempt to classify the various kinds of reorganization possible according to the reason for reorganization, the type of reorganization, and who is responsible for the reorganization decision. According to their scheme, the type of reorganization done by our agents falls into the category of structural changes and the reorganization decision can be described as shared command. Our research primarily builds on the work done by Gasser and Ishida [10], in which they use OSD in the context of a production system in order to perform adaptive work allocation and load balancing. In their approach, they define two organizational primitives, composition and decomposition, which are similar to our organizational primitives for agent spawning and composition. The main difference between their work and our work is that we use TÆMS as the underlying representation for our problems, which allows, firstly, the representation of a larger, more general class of problems and, secondly, quantitative reasoning over task structures. The latter also allows us to incorporate different design-to-criteria schedulers [16]. Horling and Lesser [9] present a different, top-down approach to OSD that also uses TÆMS as the underlying representation. However, their approach assumes a fixed number of agents with designated (and fixed) roles.
OSD is used in their work to change the interaction patterns between the agents and results in the agents using different subtasks or different resources to achieve their goals. We also extend the work done by Sycara et al. [13] on agent cloning, which is another approach to resource allocation and load balancing. In this approach, the authors present agent cloning as a possible response to agent overload: if an agent detects that it is overloaded and that there are spare (unused) resources in the system, the agent clones itself and gives its clone some part of its task load. Hence, agent cloning can be thought of as akin to agent spawning in our approach. However, the two approaches are different in that there is no specialization of the agents in the former: the cloned agents are perfect replicas of the original agents and fulfill the same roles and responsibilities as the original agents. In our approach, on the other hand, the spawned agents are specialized on a subpart of the spawning agent's task structure, which is no longer the responsibility of the spawning agent. Hence, our approach also deals with explicit organization formation and the coordination of the agents' tasks, which are not handled by their approach. Other approaches to OSD include the work of So and Durfee [14], who describe a top-down model of OSD in the context of Cooperative Distributive Problem Solving (CDPS), and Barber and Martin [1], who describe an adaptive decision-making framework in which agents are able to reorganize decision-making groups by dynamically changing (1) who makes the decisions for a particular goal and (2) who must carry out these decisions. The latter work is primarily concerned with coordination decisions and can be used to complement our OSD work, which primarily deals with task and resource allocation.

3. TASK AND RESOURCE MODEL

To ground our discussion of OSD, we now formally describe our task and resource model. In our model, the primary input to the multi-agent system (MAS) is an ordered set of problem solving requests or task instances, <P1, P2, P3, ..., Pn>, where each problem solving request, Pi, can be represented using the tuple <ti, ai, di>. In this scheme, ti is the underlying TÆMS task structure, ai ∈ N+ is the arrival time and di ∈ N+ is the deadline of the ith task instance (see footnote 1). The MAS has no prior knowledge about the task ti before the arrival time, ai. In order for the MAS to accrue quality, the task ti must be completed before the deadline, di. Furthermore, every underlying task structure, ti, can be represented using the tuple <T, τ, M, Q, E, R, ρ, C>, where:
• T is the set of tasks. The tasks are non-leaf nodes in a TÆMS task structure and are used to denote goals that the agents must achieve. Tasks have a characteristic accumulation function (see below) and are themselves composed of other subtasks and/or methods that need to be achieved in order to achieve the goal represented by that task. Formally, each task Tj can be represented using the pair (qj, sj), where qj ∈ Q and sj ⊂ (T ∪ M). For our convenience, we define two functions SUBTASKS(Task) : T → P(T ∪ M) and SUPERTASKS(TÆMS node) : T ∪ M → P(T), that return the subtasks and supertasks of a TÆMS node respectively (see footnote 2).
• τ ∈ T is the root of the task structure, i.e. the highest-level goal that the organization is trying to achieve. The quality accrued on a problem is equal to the quality of task τ.
• M is the set of executable methods, i.e., M = {m1, m2, ..., mn}, where each method, mk, is represented using the outcome distribution {(o1, p1), (o2, p2), ..., (om, pm)}. In the pair (ol, pl), ol is an outcome and pl is the probability that executing mk will result in the outcome ol. Furthermore, each outcome, ol, is represented using the triple (ql, cl, dl), where ql is the quality distribution, cl is the cost distribution and dl is the duration distribution of outcome ol. Each discrete distribution is itself a set of pairs, {(n1, p1), (n2, p2), ..., (nn, pn)}, where pl ∈ R+ is the probability that the outcome will have a quality/cost/duration of nl ∈ N, depending on the type of distribution, and Σ_{l=1}^{n} pl = 1.
• Q is the set of quality/characteristic accumulation functions (CAFs). The CAFs determine how a task group accrues quality given the quality accrued by its subtasks/methods. For our research, we use four CAFs: MIN, MAX, SUM and EXACTLY ONE. See [5] for formal definitions.
• E is the set of (non-local) effects. Again, see [5] for formal definitions.
• R is the set of resources.
• ρ is a mapping from an executable method and resource to the quantity of that resource needed (by an agent) to schedule/execute that method. That is, ρ(method, resource) : M × R → N.
• C is a mapping from a resource to the cost of that resource, that is, C(resource) : R → N+.

Footnote 1: N is the set of natural numbers including zero and N+ is the set of positive natural numbers excluding zero.
Footnote 2: P is the power set of a set, i.e., the set of all subsets of a set.

We also make the following set of assumptions in our research:
1. The agents in the MAS are drawn from the infinite set A = {a1, a2, a3, ...}. That is, we do not assume a fixed set of agents; instead, agents are created (spawned) and destroyed (combined) as needed.
2. All problem solving requests have the same underlying task structure, i.e. ∃t ∀i: ti = t, where t is the task structure of the problem that the MAS is trying to solve. We believe that this assumption holds for many of the practical problems that we have in mind, because TÆMS task structures are basically high-level plans for achieving some goal in which the steps required for achieving the goal, as well as the possible contingency situations, have been pre-computed offline and represented in the task structure. Because it represents many contingencies, alternatives, uncertain characteristics and runtime flexible choices, the same underlying task structure can play out very differently across specific instances.
3. All resources are exclusive, i.e., only one agent may use a resource at any given time. Furthermore, we assume that each agent has to own the set of resources that it needs, even though the resource ownership can change during the evolution of the organization.
4. All resources are non-consumable.

4. ORGANIZATIONAL SELF DESIGN

4.1 Agent Roles and Relationships

The organizational structure is primarily composed of roles and the relationships between the roles. One or more agents may enact a particular role and one or more roles must be enacted by every agent. The roles may be thought of as the parts played by the agents enacting the roles in the solution to the problem, and they reflect the long-term commitments made by the agents in question to a certain course of action (that includes task responsibility, authority, and mechanisms for coordination).
The relationships between the roles are the coordination relationships that exist between the subparts of a problem. In our approach, the organizational design is directly contingent on the task structure and the environmental conditions under which the problems need to be solved. We define a role as a TÆMS subtree rooted at a particular node. Hence, the set (T ∪ M) encompasses the space of all possible roles. Note, by definition, a role may consist of one or more other (sub-)roles, as a particular TÆMS node may itself be made up of one or more subtrees. Hence, we will use the terms role, task node and task interchangeably. We also differentiate between local and managed (non-local) roles. Local roles are roles that are the sole responsibility of a single agent, that is, the agent concerned is responsible for solving all the subproblems of the tree rooted at that node. For such roles, the agent concerned can do one or more subtasks, solely at its discretion and without consultation with any other agent. Managed roles, on the other hand, must be coordinated between two or more agents, as such roles will have two or more descendent local roles that are the responsibility of two or more separate agents. Any of the existing coordination mechanisms (such as GPGP [11]) can be used to achieve this coordination. Formally, let the function TYPE : A × (T ∪ M) → {Local, Managed, Unassigned} return the type of the responsibility of an agent towards a specified role. Then:

TYPE(a, r) = Local ⇐⇒ ∀ri ∈ SUBTASKS(r) : TYPE(a, ri) = Local

TYPE(a, r) = Managed ⇐⇒ [∃a1 ∃r1 : (r1 ∈ SUBTASKS(r)) ∧ (TYPE(a1, r1) = Managed)] ∨ [∃a2 ∃a3 ∃r2 ∃r3 : (a2 ≠ a3) ∧ (r2 ≠ r3) ∧ (r2 ∈ SUBTASKS(r)) ∧ (r3 ∈ SUBTASKS(r)) ∧ (TYPE(a2, r2) = Local) ∧ (TYPE(a3, r3) = Local)]

4.2 Organization Formation and Adaptation

To form or adapt their organizational structure, the agents use two organizational primitives: agent spawning and composition. These two primitives result in a change in the assignment of roles to the agents. Agent spawning is the generation of a new agent to handle a subset of the roles of the spawning agent. Agent composition, on the other hand, is orthogonal to agent spawning and involves the merging of two or more agents together - the combined agent is responsible for enacting all the roles of the agents being merged. In order to participate in the formation and adaptation of an organization, the agents need to explicitly represent and reason about the role assignments. Hence, as a part of its organizational knowledge, each agent keeps a list of the local roles that it is enacting and the non-local roles that it is managing. Note that each agent only has limited organizational knowledge and is individually responsible for spawning off or combining with another agent, as needed, based on its estimate of its performance so far. To see how the organizational primitives work, we first describe four rules that can be thought of as the organizational invariants which will always hold before and after any organizational change:

1. For a local role, all the descendent nodes of that role will be local.

TYPE(a, r) = Local =⇒ ∀ri ∈ SUBTASKS(r) : TYPE(a, ri) = Local

2. Similarly, for a managed (non-local) role, all the ascendent nodes of that role will be managed.

TYPE(a, r) = Managed =⇒ ∀ri ∈ SUPERTASKS(r) ∃ai : (ai ∈ A) ∧ (TYPE(ai, ri) = Managed)

3. If two local roles that are enacted by two different agents share a common ancestor, that ancestor will be a managed role.
(TYPE(a1, r1) = Local) ∧ (TYPE(a2, r2) = Local) ∧ (a1 ≠ a2) ∧ (r1 ≠ r2) =⇒ ∀ri ∈ (SUPERTASKS(r1) ∩ SUPERTASKS(r2)) ∃ai : (ai ∈ A) ∧ (TYPE(ai, ri) = Managed)

4. If all the direct descendants of a role are local and the sole responsibility of a single agent, that role will be a local role.

∃a ∃r : (a ∈ A) ∧ (r ∈ (T ∪ M)) ∧ [∀ri ∈ SUBTASKS(r) : TYPE(a, ri) = Local] =⇒ (TYPE(a, r) = Local)

When a new agent is spawned, the agent doing the spawning will assign one or more of its local roles to the newly spawned agent (Algorithm 1). To preserve invariant rules 2 and 3, the spawning agent will change the type of all the ascendent roles of the nodes assigned to the newly spawned agent from local to managed. Note that the spawning agent is only changing its local organizational knowledge and not the global organizational knowledge. At the same time, the spawning agent is taking on the task of managing the previously local roles. Similarly, the newly spawned agent will only know of its just assigned local roles. When an agent (the composing agent) decides to compose with another agent (the composed agent), the organizational knowledge of the composing agent is merged with the organizational knowledge of the composed agent. To do this, the composed agent takes on the roles of all the local and managed tasks of the composing agent. Care is taken to preserve the organizational invariant rules 1 and 4.

Algorithm 1 SPAWNAGENT(SpawningAgent) : A → A
1: LocalRoles ← {r ∈ (T ∪ M) | TYPE(SpawningAgent, r) = Local}
2: NewAgent ← CREATENEWAGENT()
3: NewAgentRoles ← FINDROLESFORSPAWNEDAGENT(LocalRoles)
4: for role in NewAgentRoles do
5:   TYPE(NewAgent, role) ← Local
6:   TYPE(SpawningAgent, role) ← Unassigned
7: PRESERVEORGANIZATIONALINVARIANTS()
8: return NewAgent

Algorithm 2 FINDROLESFORSPAWNEDAGENT(SpawningAgentRoles) : P(T ∪ M) → P(T ∪ M)
1: R ← SpawningAgentRoles
2: selectedRoles ← nil
3: for roleSet in [P(R) − {∅, R}] do
4:   if COST(R, roleSet) < COST(R, selectedRoles) then
5:     selectedRoles ← roleSet
6: return selectedRoles

Algorithm 3 GETRESOURCECOST(Roles) : P(T ∪ M) → N
1: M′ ← (Roles ∩ M)
2: cost ← 0
3: for resource in R do
4:   maxResourceUsage ← 0
5:   for method in M′ do
6:     if ρ(method, resource) > maxResourceUsage then
7:       maxResourceUsage ← ρ(method, resource)
8:   cost ← cost + [C(resource) × maxResourceUsage]
9: return cost

4.2.1 Role allocation during spawning

One of the key questions that the agent doing the spawning needs to answer is: which of its local roles should it assign to the newly spawned agent, and which of its local roles should it keep to itself? The onus of answering this question falls on the FINDROLESFORSPAWNEDAGENT() function, shown in Algorithm 2. This function takes the set of local roles that are the responsibility of the spawning agent and returns a subset of those roles for allocation to the newly spawned agent. This subset is selected based on the results of a cost function, as is evident from line 4 of the algorithm.
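To make the search concrete, here is a small, self-contained Python rendering of Algorithms 2 and 3. It is a sketch under simplifying assumptions of our own: a role is modeled as just the frozenset of executable methods beneath it, rho and C are plain dictionaries, and the pluggable cost function shown is the "minimizing total resources" heuristic described in the next subsection.

```python
from itertools import chain, combinations

# Illustrative inputs (ours, not from the paper): each role is the frozenset
# of executable methods beneath it; rho and C follow the Section 3 mappings.
RESOURCES = ["cpu", "disk"]
rho = {("m1", "cpu"): 2, ("m2", "cpu"): 3, ("m2", "disk"): 1, ("m3", "disk"): 4}
C = {"cpu": 5, "disk": 2}

def resource_cost(methods):
    """Algorithm 3 (GETRESOURCECOST): an agent must own enough of each
    resource to satisfy its most demanding method; sum the resulting costs."""
    total = 0
    for res in RESOURCES:
        peak = max((rho.get((m, res), 0) for m in methods), default=0)
        total += C[res] * peak
    return total

def min_resources_cost(R, R_prime):
    """'Minimizing total resources' heuristic (next subsection):
    COST(R, R') = GETRESOURCECOST(R - R') + GETRESOURCECOST(R')."""
    keep = set().union(*(R - R_prime)) if (R - R_prime) else set()
    give = set().union(*R_prime) if R_prime else set()
    return resource_cost(keep) + resource_cost(give)

def find_roles_for_spawned_agent(local_roles, cost=min_resources_cost):
    """Algorithm 2 (FINDROLESFORSPAWNEDAGENT): search the non-empty proper
    subsets of the spawning agent's local roles, i.e. P(R) - {empty, R},
    and return the subset that minimizes the pluggable cost function."""
    roles = list(local_roles)
    candidates = chain.from_iterable(
        combinations(roles, k) for k in range(1, len(roles)))
    best, best_cost = None, float("inf")
    for subset in candidates:
        c = cost(set(roles), set(subset))
        if c < best_cost:
            best, best_cost = set(subset), c
    return best

# Splitting three single-method roles between the spawning and spawned agent:
roles = {frozenset({"m1"}), frozenset({"m2"}), frozenset({"m3"})}
print(find_roles_for_spawned_agent(roles))
```

Note that the enumeration mirrors line 3 of Algorithm 2 and is exponential in the number of local roles, so an exhaustive search of this kind is only practical when an agent's set of local roles is small.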
Since the use of different cost functions will result in different organizational structures, and since we have no a priori reason to believe that one cost function will outperform the others, we evaluated the performance of three different cost functions based on the following three different heuristics:

Algorithm 4 GETEXPECTEDDURATION(Roles) : P(T ∪ M) → N+
1: M′ ← (Roles ∩ M)
2: exptDuration ← 0
3: for each outcome ⟨(q, c, d), outcomeProb⟩ of each method in M′ do
4:   exptOutcomeDuration ← 0
5:   for (n, p) in d do
6:     exptOutcomeDuration ← exptOutcomeDuration + (n × p)
7:   exptDuration ← exptDuration + (exptOutcomeDuration × outcomeProb)
8: return exptDuration

Allocating top-most roles first: This heuristic always breaks up at the top-most nodes first. That is, if the nodes of a task structure were numbered, starting from the root, in a breadth-first fashion, then this heuristic would select the local role of the spawning agent that had the lowest number and break up that node (by allocating one of its subtasks to the newly spawned agent). We selected this heuristic because (a) it is the simplest to implement, (b) it is the fastest to run (the role allocation can be done in constant time, without the need for a search through the task structure) and (c) it makes sense from a human-organizational perspective, as this heuristic corresponds to dividing an organization along functional lines.

Minimizing total resources: This heuristic attempts to minimize the total cost of the resources needed by the agents in the organization to execute their roles. If R is the set of local roles of the spawning agent and R′ is the subset of roles being evaluated for allocation to the newly spawned agent, the cost function for this heuristic is given by:

COST(R, R′) ← GETRESOURCECOST(R − R′) + GETRESOURCECOST(R′)

Balancing execution time: This heuristic attempts to allocate roles in a way that tries to ensure that each agent has an equal amount of work to do. For each potential role allocation, this heuristic works by calculating the absolute value of the difference between the expected duration of its own roles after spawning and the expected duration of the roles of the newly spawned agent. If this difference is close to zero, then both agents have roughly the same amount of work to do. Formally, if R is the set of local roles of the spawning agent and R′ is the subset of roles being evaluated for allocation to the newly spawned agent, then the cost function for this heuristic is given by:

COST(R, R′) ← |GETEXPECTEDDURATION(R − R′) − GETEXPECTEDDURATION(R′)|

To evaluate these heuristics, we ran a series of experiments that tested the performance of the resultant organization on randomly generated task structures. The results are given in Section 6.

4.3 Reasons for Organizational Change

As organizational change is expensive (requiring clock cycles, allocation/deallocation of resources, etc.), we want a stable organizational structure that is suited to the task and environmental conditions at hand. Hence, we wish to change the organizational structure only if the task structure and/or environmental conditions change. Also, to allow temporary changes to the environmental conditions to be overlooked, we want the probability of an organizational change to be inversely proportional to the time since the last organizational change. If this time is relatively short, the agents are still adjusting to the changes in the environment - hence the probability of an agent initiating an organizational change should be high.
Similarly, if the time since the last organizational change is relatively large, we wish to have a low probability of organizational change. To allow this variation in the probability of organizational change, we use simulated annealing to determine the probability of keeping an existing organizational structure. This probability is calculated using the annealing formula:

p = e^(−ΔE/(kT))

where ΔE is the amount of overload/underload, T is the time since the last organizational change and k is a constant. The mechanism for computing ΔE is different for agent spawning than for agent composition and is described below. From this formula, if T is large, then p, the probability of keeping the existing organizational structure, is large. Note that the value of p is capped at a certain threshold in order to prevent the organization from being too sluggish in its reaction to environmental change. To compute whether agent spawning is necessary, we use the annealing equation with ΔE = 1/(α · Slack), where α is a constant and Slack is the difference between the total time available for completion of the outstanding tasks and the sum of the expected time required for completion of each task on the task queue. Also, if the amount of Slack is negative, immediate agent spawning will occur without use of the annealing equation. To calculate whether agent composition is necessary, we again use the simulated annealing equation. However, in this case, ΔE = β · IdleTime, where β is a constant and IdleTime is the amount of time for which the agent was idle. If the agent has been sitting idle for a long period of time, ΔE is large, which implies that p, the probability of keeping the existing organizational structure, is low.

5. ORGANIZATION AND ROBUSTNESS

There are two approaches commonly used to achieve robustness in multiagent systems:

1. the Survivalist Approach [12], which involves replicating domain agents in order to allow the replicas to take over should the original agents fail; and

2. the Citizen Approach [7], which involves the use of special monitoring agents (called Sentinel Agents) in order to detect agent failure and dynamically start up new agents in lieu of the failed ones.

The advantage of the survivalist approach is that recovery is relatively fast, since the replicas pre-exist in the organization and can take over as soon as a failure is detected. The advantages of the citizen approach are that it requires fewer resources and little modification to the existing organizational structure and coordination protocol, and that it is simpler to implement. Both of these approaches can be applied to achieve robustness in our OSD agents and it is not clear which approach would be better; rather, a thorough empirical evaluation of both approaches would be required. In this paper, we present the citizen approach, as it has been shown [7] to perform better than the survivalist approach in the Contract Net protocol, and leave the presentation and evaluation of the survivalist approach to a future paper. To implement the citizen approach, we designed special monitoring agents that periodically poll the domain agents by sending them "are you alive" messages that the agents must respond to. If an agent fails, it will not respond to such messages - the monitoring agents can then create a new agent and delegate the responsibilities of the dead agent to the new agent.
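As an illustration only (the paper gives no code for the sentinels), the polling loop of such a monitoring agent might look like the following sketch; send_ping, create_agent and recorded_roles are hypothetical names standing in for the messaging layer, the agent factory, and the role assignments the monitor has deduced from observed message traffic:

```python
import time

POLL_INTERVAL = 5.0  # seconds between "are you alive" probes (illustrative value)
PING_TIMEOUT = 1.0   # how long to wait for a reply before presuming failure

def sentinel_loop(agents, send_ping, create_agent, recorded_roles):
    """Citizen-approach sketch: poll every domain agent; on a missed reply,
    spawn a replacement and hand it the failed agent's recorded roles.

    recorded_roles maps agent id -> the local/managed roles the monitor has
    deduced for that agent by watching its spawn/compose message traffic."""
    while True:
        for agent_id in list(agents):
            alive = send_ping(agent_id, timeout=PING_TIMEOUT)
            if not alive:  # no reply within the timeout: presumed dead
                replacement = create_agent()
                replacement.assume_roles(recorded_roles[agent_id])
                recorded_roles[replacement.id] = recorded_roles.pop(agent_id)
                agents.discard(agent_id)
                agents.add(replacement.id)
        time.sleep(POLL_INTERVAL)
```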
This delegation of responsibilities is non-trivial, as the monitoring agents do not have access to the internal state of the domain agents, which is itself composed of two components - the organizational knowledge and the task information. The former consists of the information about the local and managerial roles of the agent, while the latter is composed of the methods that are being scheduled and executed and the tasks that have been delegated to other agents. This state information can only be deduced by monitoring and recording the messages being sent and received by the domain agents. For example, in order to deduce the organizational knowledge, the monitoring agents need to keep track of the spawn and compose messages sent by the agents to trigger the spawning and composition operations, respectively. The deduction process is particularly complicated in the case of the task information, as the monitoring agents do not have access to the private schedules of the domain agents. The details are beyond the scope of this paper.

6. EVALUATION

To evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines. At the start of each experiment, a random TÆMS task structure was generated with a specified depth and branching factor. During the course of the experiment, a series of task instances (problems) arrive at the organization and must be completed by the agents before their specified deadlines. To directly compare the OSD approach with the Contract Net approach, each experiment was repeated several times - using OSD agents on the first run and a different number of Contract Net agents on each subsequent run. We were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials. We divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and/or deadlines). The two graphs in Figure 1 show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents. The results shown are the averages of running 40 experiments. 20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles. The remaining 20 experiments had a varying task arrival rate - the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks. In all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3. The runtime of all the experiments was 2500 cycles. We tested several hypotheses relating to the comparative performance of our OSD approach using the Wilcoxon Matched-Pair Signed-Rank test. Matched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment. The tested hypotheses are:

The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks. This null hypothesis is rejected for all contract net organizations with fewer than 14 agents (static: p < 0.0003; dynamic: p < 0.03).
For large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents; however, the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents.³ Thus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld.

³ These values should not be construed as an indication of the scalability of our approach. We have tested our approach on organizations with more than 300 agents, which is significantly greater than the number of agents needed for the kind of applications that we have in mind (i.e. web service choreography, efficient dynamic use of grid computing, distributed information gathering, etc.).

Figure 1: Graph comparing the average performance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents). The error bars show the standard deviations.

Criteria/Heuristic        BET  TF   MR   Rand
Number of Agents          572  567  100  139
No-Org-Changes            641   51    5  177
Total-Messages-Sent       586  499   13   11
Resource-Cost             346  418  337   66
Tasks-Completed           427  560  154  166
Average-Quality           367  492  298  339
Average-Response-Time     356  321  370  283
Average-Runtime           543  323   74  116
Average-Turnaround-Time   560  314   74  126

Table 1: The number of times that each heuristic performed the best or statistically equivalent to the best for each of the performance criteria. Heuristic key: BET is Balancing Execution Time, TF is Topmost First, MR is Minimizing Resources and Rand is a random allocation strategy, in which every TÆMS node has a uniform probability of being selected for allocation.

The OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality. We can reject the null hypothesis for contract net organizations with fewer than 12 agents (static: p < 0.01; dynamic: p < 0.05). For larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD.

The OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or a higher response time is rejected for all contract net organizations (static: p < 0.0002; dynamic: p < 0.0004).

The OSD agents send fewer messages than the Contract Net agents: The null hypothesis that OSD sends the same number of or more messages is rejected for all contract net organizations (p < 0.0003 in all cases except 8 contract net agents in a static environment, where p < 0.02).

Hence, as demonstrated by the above tests, our agents perform better than the contract net agents, as they complete a larger number of tasks, achieve a greater quality, and also have a lower response time and communication overhead. These results make intuitive sense given our goals for the OSD approach. We expected the OSD organizations to have a faster average response time and to send fewer messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids. The quality gained on the tasks is directly dependent on the number of tasks completed; hence, the more tasks completed, the greater the average quality. The results of testing the first hypothesis were slightly more surprising.
It appears that due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents are needed to complete an equal number of tasks.

Next, we evaluated the performance of the three heuristics for allocating tasks. Some preliminary experiments (not reported here due to space constraints) demonstrated the lack of a clear winner amongst the three heuristics for most of the performance criteria that we evaluated. We suspected this to be the case because different heuristics are better for different task structures and environmental conditions, and since each experiment starts with a different random task structure, we couldn't find one allocation strategy that always dominated the others for all the performance criteria. To determine which heuristic performs the best, given a set of task structures, environmental conditions and performance criteria, we performed a series of experiments that were controlled using the following five variables:

• The depth of the task structure: varied from 3 to 5.

• The branching factor: varied from 3 to 5.

• The probability of any given task node having a MIN CAF: varied from 0.0 to 1.0 in increments of 0.2. The probability of any node having a SUM CAF was in turn modified to ensure that the probabilities add up to 1.⁴

• The arrival rate: from 10 to 40 cycles in increments of 10.

• The deadline slack: from 5 to 15 in increments of 5.

⁴ Since our preliminary analysis led us to believe that the number of MAX and EXACTLY ONE CAFs in a task structure has a minimal effect on the performance of the allocation strategies being evaluated, we set the probabilities of the MAX and EXACTLY ONE CAFs to 0 in order to reduce the combinatorial explosion of the full factorial experimental design.

Each experiment was repeated 20 times, with a new task structure being generated each time - these 20 experiments formed an experimental set. Hence, all the experiments in an experimental set had the same values for the exogenous variables that were used to control the experiment. Note that a static environment was used in each of these experiments, as we wanted to observe the effect of the arrival rate and deadline slack on each of the three heuristics. Also, the results of any experiment in which the OSD organization consisted of a single agent were culled from the results. Similarly, experiments in which the generated task structures were unsatisfiable (given the deadline constraints) were removed from the final results. If any experimental set had more than 15 experiments thus removed, the whole set was ignored for performing the evaluation. The final evaluation was done on 673 experimental sets. We tested the performance of these three heuristics on the following criteria:

1. The average number of agents used.

2. The total number of organizational changes.

3. The total messages sent by all the agents.

4. The total resource cost of the organization.

5. The number of tasks completed.

6. The average quality accrued. The average quality is defined as the total quality accrued during the experimental run divided by the sum of the number of tasks completed and the number of tasks failed.

7. The average response time of the organization.
The response time of a task is defined as the difference between the time at which any agent in the organization starts working on the task (the start time) and the time at which the task was generated (the generation time). Hence, the response time is equivalent to the wait time. For tasks that are never attempted/started, the response time is set to the final runtime minus the generation time.

8. The average runtime of the tasks attempted by the organization. This time is defined as the difference between the time at which the task completed or failed and the start time. For tasks that were never started, this time is set to zero.

9. The turnaround time, defined as the sum of the response time and runtime of a task.

Except for the number of tasks completed and the average quality accrued, lower values for the various performance criteria indicate better performance. Again we ran the Wilcoxon Matched-Pair Signed-Rank tests on the experiments in each of the experimental sets. The null hypothesis in each case was that there is no difference between the pair of heuristics for the performance criteria under consideration. We were interested in the cases in which we could reject the null hypothesis with 95% confidence (p < 0.05). We noted the number of times that a heuristic performed the best or was in a group that performed statistically better than the rest. These counts are given in Tables 1 and 2. The number of experimental sets in which each heuristic performed the best or statistically equivalent to the best is shown in Table 1. The breakup of these numbers into (1) the number of times that each heuristic performed better than all the other heuristics and (2) the number of times each heuristic was statistically equivalent to another group of heuristics, all of which performed the best, is shown in Table 2. Both of these tables allow us to glean important information about the performance of the three heuristics. Particularly interesting were the following results:

• Whereas Balancing Execution Time (BET) used the lowest number of agents in the largest number of experimental sets (572), in most of these cases (337 experimental sets) it was statistically equivalent to Topmost First (TF). When these two heuristics didn't perform equally, there was an almost even split between the number of experimental sets in which one outperformed the other. We believe this was the case because BET always bifurcates the agents into two agents that have a more or less equal task load. This often results in organizations that have an even number of agents - none of which are small⁵ enough to combine into a larger agent. With TF, on the other hand, a large agent can successively spawn off smaller agents until it and the spawned agents are small enough to complete their tasks before the deadlines - this often results in organizations with an odd number of agents that is less than those used by BET.

• As expected, BET achieved the lowest number of organizational changes in the largest number of experimental sets. In fact, it was over ten times as good as its second best competitor (TF). This shows that if the agents are conscientious in their initial task allocation, there is a lesser need for organizational change later on, especially for static environments.

• A particularly interesting, yet easily explainable, result was that of the average response time. We found that the Minimizing Resources (MR) heuristic performed the best when it came to minimizing the average response time! This can be explained by the fact that the MR heuristic is extremely greedy and prefers to spawn off small agents that have a tiny resource footprint (so as to minimize the total increase in the resource cost to the organization at the time of spawning). Whereas most of these small agents might compose with other agents over time, the presence of a single small agent is sufficient to reduce the response time. Interestingly, the MR heuristic is not the most effective heuristic when it comes to minimizing the resource cost of the organization - it only outperforms a random task/resource allocation. We believe this is in part due to the greedy nature of this heuristic and in part because all spawning and composition operations only use local information. We believe that using some non-local information about the resource allocation might help in making better decisions, something that we plan to look at in the future.

⁵ For this discussion, small agents are agents that have a low expected duration for their local roles (as calculated by Algorithm 4).

Criteria/Heuristic       BET  TF   MR   Rand  BET+TF  BET+Rand  MR+Rand  TF+MR  BET+TF+MR  All
Number of Agents          94   88    3     7     337         2        0      0         12   85
No-Org-Changes           480    0    0    29      16       113        0      0          0    5
Total-Messages-Sent      170   85    0     2     399         1        0      0          7    5
Resource-Cost             26  100  170    42     167         0        7      6        128   15
Tasks-Completed           77  197    4    28     184         1        3      9         36   99
Average-Quality           38  147   26   104      76         0       11     11         34  208
Average-Response-Time    104   74  162    43      31        20       16      8          7  169
Average-Runtime          322  110    0    12     121        13        1      1          1   69
Average-Turnaround-Time  318   94    1    11     125        26        1      0          7   64

Table 2: The number of times that each individual heuristic performed the best, and the number of times that a certain group of statistically equivalent heuristics performed the best. Only the more interesting heuristic groupings are shown. "All" shows the number of experimental sets in which there was no statistical difference between the three heuristics and a random allocation strategy.

Finally, we evaluated the performance of the citizen approach to robustness as applied to our OSD mechanism (Figure 2).

Figure 2: Graph demonstrating the robustness of the citizen approach. The baseline shows the number of tasks completed in the absence of any failure.

As expected, as the probability of failure increases, the number of agents failing during a run also increases. This results in a slight decrease in the number of tasks completed, which can be explained by the fact that whenever an agent fails, it loses whatever work it was doing at the time. The newly created agent that fills in for the failed one must redo the work, thus wasting precious time which might not be available close to a deadline. As a part of our future research, we wish to, firstly, evaluate the survivalist approach to robustness. The survivalist approach might actually be better than the citizen approach for higher probabilities of agent failure, as the replicated agents may be processing the task structures in parallel and can take over the moment the original agents fail - thus saving time around tight deadlines. Also, we strongly believe that the optimal organizational structure may vary depending on the probability of failure and the desired level of robustness. For example, one way of achieving a higher level of robustness in the survivalist approach, given a large number of agent failures, would be to relax the task deadlines.
However, such a relaxation would result in the system using fewer agents in order to conserve resources, which in turn would have a detrimental effect on the robustness. Therefore, towards this end, we have begun exploring the robustness properties of task structures and the ways in which the organizational design can be modified to take such properties into account.

7. CONCLUSION

In this paper, we have presented a run-time approach to organization in which the agents use Organizational Self-Design to come up with a suitable organizational structure. We have also compared the performance of the organizations generated by the agents following our approach with the bespoke organization formation that takes place in the Contract Net protocol, and have demonstrated that our approach is better than the Contract Net approach, as evidenced by the larger number of tasks completed, the greater quality achieved and the lower response time. Finally, we tested the performance of three different resource allocation heuristics on various performance metrics and also evaluated the robustness of our approach.

8. REFERENCES

[1] K. S. Barber and C. E. Martin. Dynamic reorganization of decision-making groups. In AGENTS '01, pages 513-520, New York, NY, USA, 2001.

[2] K. M. Carley and L. Gasser. Computational organization theory. In G. Wiess, editor, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, pages 299-330. MIT Press, 1999.

[3] W. Chen and K. S. Decker. The analysis of coordination in an information system application - emergency medical services. In Lecture Notes in Computer Science (LNCS), number 3508, pages 36-51. Springer-Verlag, May 2005.

[4] D. Corkill and V. Lesser. The use of meta-level control for coordination in a distributed problem solving network. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 748-756, August 1983.

[5] K. S. Decker. Environment centered analysis and design of coordination mechanisms. Ph.D. thesis, Dept. of Computer Science, University of Massachusetts, Amherst, May 1995.

[6] K. S. Decker and J. Li. Coordinating mutually exclusive resources using GPGP. Autonomous Agents and Multi-Agent Systems, 3(2):133-157, 2000.

[7] C. Dellarocas and M. Klein. An experimental evaluation of domain-independent fault handling services in open multi-agent systems. In Proceedings of the International Conference on Multi-Agent Systems (ICMAS-2000), July 2000.

[8] V. Dignum, F. Dignum, and L. Sonenberg. Towards dynamic reorganization of agent societies. In Proceedings of CEAS: Workshop on Coordination in Emergent Agent Societies at ECAI, pages 22-27, Valencia, Spain, September 2004.

[9] B. Horling, B. Benyo, and V. Lesser. Using self-diagnosis to adapt organizational structures. In AGENTS '01, pages 529-536, New York, NY, USA, 2001. ACM Press.

[10] T. Ishida, L. Gasser, and M. Yokoo. Organization self-design of distributed production systems. IEEE Transactions on Knowledge and Data Engineering, 4(2):123-134, 1992.

[11] V. R. Lesser et al. Evolution of the GPGP/TÆMS domain-independent coordination framework. Autonomous Agents and Multi-Agent Systems, 9(1-2):87-143, 2004.

[12] O. Marin, P. Sens, J. Briot, and Z. Guessoum. Towards adaptive fault tolerance for distributed multi-agent systems. In Proceedings of ERSADS 2001, May 2001.

[13] O. Shehory, K. Sycara, et al. Agent cloning: an approach to agent mobility and resource allocation. IEEE Communications Magazine, 36(7):58-67, 1998.

[14] Y. So and E. Durfee. An organizational self-design model for organizational change. In AAAI-93 Workshop on AI and Theories of Groups and Organizations, pages 8-15, Washington, D.C., July 1993.

[15] T. Wagner. Coordination decision support assistants (coordinators). Technical Report 04-29, BAA, 2004.

[16] T. Wagner and V. Lesser. Design-to-criteria scheduling: Real-time agent control. In Proc. of the AAAI 2000 Spring Symposium on Real-Time Autonomous Systems, pages 89-96, 2000.
Organizational Self-Design in Semi-dynamic Environments ABSTRACT In this paper we propose a run-time approach to organization that is contingent on the task structure of the problem being solved and the environmental conditions under which it is being solved. We use T1EMS as the underlying representation for our problems and describe a framework that uses Organizational Self-Design (OSD) to allocate tasks and resources to the agents and coordinate their activities. 1. INTRODUCTION In this paper, we are primarily interested in the organizational design of a multiagent system--the roles enacted by the agents, the coordination between the roles and the number and assignment of roles and resources to the individual agents. The organizational design is complicated by the fact that there is no best way to organize and all ways of organizing are not equally effective [1]. Instead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved. The environmental conditions may not be known a priori or may change over time, which would preclude the use of a static organizational structure. On the other hand, all problem instances and environmental conditions are not always unique which would rule out the use of a new, bespoke organizational structure for every problem instance. In our approach we use Organizational Self-Design (OSD) to dynamically alter the organizational structure of the agents. We define two operators for OSD--agent spawning and composition--when an agent becomes overloaded, it spawns off a new agent to handle part of its task load/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another (underloaded) agent. Our work builds on the work by [2]. The primary difference between their work and our work is that we use T1EMS [3] as the underlying representation for our problems. T1EMS is a computational framework that uses annotated hierarchical task networks (HTNs) to allow quantitative reasoning over the task structures. T1EMS allows us to explicitly reason about alternative ways of doing a task, arbitrary ways of combining subtasks, uncertainties, quality/cost tradeoffs, and non-local effects and is hence more general than the approach used by [2]. 2. ORGANIZATIONAL SELF DESIGN 2.1 Agent Roles and Relationships As explained in Section 1, the organizational structure is primarily composed of roles and the relationships between the roles. One or more agents may enact a particular role and one or more roles must be enacted by every agent. The roles may be thought of as the parts played by the agents enacting the roles in the solution to the problem and reflect the long-term commitments made by the agents in question to a certain course of action (that includes task responsibility, authority, and mechanisms for coordination). The relationships between the roles are the coordination relationships that exist between the subparts of a problem. In our approach, the organizational design is directly contingent on the task structure and the environmental conditions under which the problems need to be solved. We define a role as a T1EMS subtree rooted at a particular node. Note, by definition, a role may consist of one or more other (sub -) roles as a particular T1EMS node may itself be made up of one or more subtrees. Hence, we will use the terms role, task node and task interchangeably. We, also, differentiate between local and managed (nonlocal) roles. 
Local roles are roles that are the sole responsibility of a single agent, that is, the agent concerned is responsible for solving all the subproblems of the tree rooted at that node. For such roles, the agent concerned can do one or more subtasks, solely at its discretion and without consultation with any other agent. Managed roles, on the other hand, must be coordinated between two or more agents as such roles will have two or more descendent local roles that are the responsibility of two or more separate agents. We achieve this coordination by assigning one of the agents as the manager responsible for enacting the non-local role. This manager is responsible for making the coordination decisions, often in consultation with the agents enacting the descendent sub-roles of that particular non-local role. 2.2 Organization Formation and Adaptation To form or adapt their organizational structure, the agents use two organizational primitives: agent spawning and composition. These two primitives result in a change in the assignment of roles to the agents. Agent spawning is the generation of a new agent to handle a subset of the roles of the spawning agent. Agent composition, on the other hand, is orthogonal to agent spawning and involves the merging of two or more agents together--the combined agent is responsible for enacting all the roles of the agents being merged. Hence, OSD can be thought of as a search in the space of all the role assignments for a suitable role assignment that minimizes or maximizes a performance measure In order to participate in the formation and adaption of an organization, the agents need to explicitly represent and reason about the role assignments. Hence, as a part of its organizational knowledge, each agent keeps a list of the local roles that it is enacting and the non-local roles that it is managing. Note that each agent only has limited organizational knowledge and is individually responsible for spawning off or combining with another agent, as needed, based on its estimate of its performance so far. To see how the organizational primitives work, we first describe four rules that can be thought of as the organizational invariants which will always hold before and after any organizational change: 1. For a local role, all the descendent nodes of that role will be local. 2. Similarly, for a managed (non-local) role, all the ascendent nodes of that role will be managed. 3. If two local roles that are enacted by two different agents share a common ancestor, that ancestor will be a managed role. 4. If all the direct descendants of a role are local and the sole responsibility of a single agent, that role will be a local role. When a new agent is spawned, the agent doing the spawning will assign one or more of its local roles to the newly spawned agent. To preserve invariant rules 2 and 3, the spawning agent will change the type of all the ascendent roles of the nodes assigned to the newly spawned agent from local to managed. Note that the spawning agent is only changing its local organizational knowledge and not the global organizational knowledge. At the same time, the spawning agent is taking on the task of managing the previously local roles. Similarly, the newly spawned agent will only know of its just assigned local roles. When an agent (the composing agent) decides to compose with another agent (the composed agent), the organizational knowledge of the composing agent is merged with the organizational knowledge of the composed agent. 
To do this, the composed agent takes on the roles of all the local and managed tasks of the composing agent. Care is taken to preserve the organizational invariant rules 1 and 4. 2.3 Reasons for Organizational Change As organizational change is expensive (requiring clock cycles, allocation/deallocation of resources, etc.) we want a stable organizational structure that is suited to the task and environmental conditions at hand. Hence, we wish to change the organizational structure only if the task structure and/or environmental conditions change. Also to allow temporary changes to the environmental conditions to be overlooked, we want the probability of an organizational change to be inversely proportional to the time since the last organizational change. If this time is relatively short, the agents are still adjusting to the changes in the environment - hence the probability of an agent initiating an organizational change should be high. Similarly, if the time since the last organizational change is relatively large, we wish to have a low probability of organizational change. To allow this variation in probability of organizational change, we use simulated annealing to determine the probability of keeping an existing organizational structure. This probability is calculated using the annealing formula: p = e − ΔE kT where ΔE is the "amount" of overload/underload, T is the time since the last organizational change and k is a constant. The mechanism of computing ΔE is different for agent spawning than for agent composition and is described below. From this formula, if T is large, p, or the probability of keeping the existing organizational structure is large. Agent spawning only occurs when the agent doing the spawning is too overloaded and cannot complete all the tasks in its task queue by the given deadlines of the tasks. To compute if spawning is necessary, we use the annealing equation with ΔE = 1 α ∗ Slack where α is a constant and Slack is the difference between the total time available for completion of the outstanding tasks and the sum of the expected time required for completion of each task on the task queue. Agent composition, on the other hand, is exactly orthogonal to agent spawning as agent composition only occurs when the agents are underloaded. In such a situation, some of the agents will be sitting idle waiting for tasks to arrive. These idle agents will either be utilizing resources while waiting, or more likely, will have resources allocated to them that could be used elsewhere in the system. In either case, it makes sense to combine some of the idle agents with other agents freeing precious resources. To calculate if agent composition is necessary, we again use the simulated annealing equation. However, in this case, ΔE = β ∗ Idle Time, where β is a constant and Idle Time is the amount of time for which the agent was idle. If the agent has been sitting idle for a long period of time, ΔE is large, which implies that p, the probability of keeping the existing organizational structure, is low. 3. EVALUATION To evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines. At the start of each experiment, a random TÆMS task structure was generated with a specified depth and branching factor. 
During the course of the experiment, a series of task instances arrive at the organization and must be completed by the agents before their specified deadlines. To directly compare the OSD approach with the Contract Net approach, each experiment was repeated several times--using OSD agents on the first run and a different number of Contract Net agents on each subsequent run. We were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials. We divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and/or deadlines). The two graphs in Figure 1, show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents. The results shown are the averages of running 40 experiments. 20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles. The remaining 20 experiments had a varying task arrival rate the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks. In all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3. The runtime of all the experiments was 2500 cycles. We tested several hypotheses relating to the comparative performance of our OSD approach using theWilcoxon Matched-Pair Signed-Rank tests. Matched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment. The tested hypothesis are: The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks. This null hypothesis is rejected for all contract net organizations with less than 14 agents (static: p <0.0003; dynamic: p <0.03). For large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents, however the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents. Thus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld. The OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality. We can reject the null hypothesis for contract net organizations with less than 12 agents (static: p <0.01; dynamic: p <0.05). For larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD. The OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p <0.0002; dynamic: p <0.0004). 
The OSD agents send less messages than the Contract Net Agents: The null hypothesis that OSD sends the same or more messages is rejected for all contract net organizations (p <.0003 in all cases except 8 contract net agents in a static environment where p <0.02) Hence, as demonstrated by the above tests, our agents perform better than the contract net agents as they complete a larger number of tasks, achieve a greater quality and also Figure 1: Graph comparing the average perfor mance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents). The error bars show the standard deviations. have a lower response time and communication overhead. These results make intuitive sense given our goals for the OSD approach. We expected the OSD organizations to have a faster average response time and to send less messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids. The quality gained on the tasks is directly dependent on the number of tasks completed, hence the more the number of tasks completed, the greater average quality. The results of testing the first hypothesis were slightly more surprising. It appears that due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents are needed to complete an equal number of tasks.
Organizational Self-Design in Semi-dynamic Environments ABSTRACT In this paper we propose a run-time approach to organization that is contingent on the task structure of the problem being solved and the environmental conditions under which it is being solved. We use T1EMS as the underlying representation for our problems and describe a framework that uses Organizational Self-Design (OSD) to allocate tasks and resources to the agents and coordinate their activities. 1. INTRODUCTION In this paper, we are primarily interested in the organizational design of a multiagent system--the roles enacted by the agents, the coordination between the roles and the number and assignment of roles and resources to the individual agents. The organizational design is complicated by the fact that there is no best way to organize and all ways of organizing are not equally effective [1]. Instead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved. The environmental conditions may not be known a priori or may change over time, which would preclude the use of a static organizational structure. On the other hand, all problem instances and environmental conditions are not always unique which would rule out the use of a new, bespoke organizational structure for every problem instance. In our approach we use Organizational Self-Design (OSD) to dynamically alter the organizational structure of the agents. We define two operators for OSD--agent spawning and composition--when an agent becomes overloaded, it spawns off a new agent to handle part of its task load/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another (underloaded) agent. Our work builds on the work by [2]. The primary difference between their work and our work is that we use T1EMS [3] as the underlying representation for our problems. T1EMS is a computational framework that uses annotated hierarchical task networks (HTNs) to allow quantitative reasoning over the task structures. T1EMS allows us to explicitly reason about alternative ways of doing a task, arbitrary ways of combining subtasks, uncertainties, quality/cost tradeoffs, and non-local effects and is hence more general than the approach used by [2]. 2. ORGANIZATIONAL SELF DESIGN 2.1 Agent Roles and Relationships 2.2 Organization Formation and Adaptation 2.3 Reasons for Organizational Change 3. EVALUATION To evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines. At the start of each experiment, a random TÆMS task structure was generated with a specified depth and branching factor. During the course of the experiment, a series of task instances arrive at the organization and must be completed by the agents before their specified deadlines. To directly compare the OSD approach with the Contract Net approach, each experiment was repeated several times--using OSD agents on the first run and a different number of Contract Net agents on each subsequent run. We were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials. We divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and/or deadlines). 
The two graphs in Figure 1, show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents. The results shown are the averages of running 40 experiments. 20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles. The remaining 20 experiments had a varying task arrival rate the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks. In all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3. The runtime of all the experiments was 2500 cycles. We tested several hypotheses relating to the comparative performance of our OSD approach using theWilcoxon Matched-Pair Signed-Rank tests. Matched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment. The tested hypothesis are: The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks. This null hypothesis is rejected for all contract net organizations with less than 14 agents (static: p <0.0003; dynamic: p <0.03). For large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents, however the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents. Thus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld. The OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality. We can reject the null hypothesis for contract net organizations with less than 12 agents (static: p <0.01; dynamic: p <0.05). For larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD. The OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p <0.0002; dynamic: p <0.0004). The OSD agents send less messages than the Contract Net Agents: The null hypothesis that OSD sends the same or more messages is rejected for all contract net organizations (p <.0003 in all cases except 8 contract net agents in a static environment where p <0.02) Hence, as demonstrated by the above tests, our agents perform better than the contract net agents as they complete a larger number of tasks, achieve a greater quality and also Figure 1: Graph comparing the average perfor mance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents). The error bars show the standard deviations. have a lower response time and communication overhead. These results make intuitive sense given our goals for the OSD approach. We expected the OSD organizations to have a faster average response time and to send less messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids. 
The quality gained on the tasks is directly dependent on the number of tasks completed; hence the more tasks completed, the greater the average quality. The results of testing the first hypothesis were slightly more surprising. It appears that, due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents is needed to complete an equal number of tasks.
Organizational Self-Design in Semi-dynamic Environments ABSTRACT In this paper we propose a run-time approach to organization that is contingent on the task structure of the problem being solved and the environmental conditions under which it is being solved. We use TÆMS as the underlying representation for our problems and describe a framework that uses Organizational Self-Design (OSD) to allocate tasks and resources to the agents and coordinate their activities. 1. INTRODUCTION Instead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved. The environmental conditions may not be known a priori or may change over time, which would preclude the use of a static organizational structure. On the other hand, all problem instances and environmental conditions are not always unique, which would rule out the use of a new, bespoke organizational structure for every problem instance. In our approach we use Organizational Self-Design (OSD) to dynamically alter the organizational structure of the agents. TÆMS is a computational framework that uses annotated hierarchical task networks (HTNs) to allow quantitative reasoning over the task structures. 3. EVALUATION To evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines. At the start of each experiment, a random TÆMS task structure was generated with a specified depth and branching factor. During the course of the experiment, a series of task instances arrive at the organization and must be completed by the agents before their specified deadlines. To directly compare the OSD approach with the Contract Net approach, each experiment was repeated several times--using OSD agents on the first run and a different number of Contract Net agents on each subsequent run. We were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials. We divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and/or deadlines). The two graphs in Figure 1 show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents. The results shown are the averages of running 40 experiments. 20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles. The remaining 20 experiments had a varying task arrival rate: the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks. In all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3. The runtime of all the experiments was 2500 cycles. We tested several hypotheses relating to the comparative performance of our OSD approach using the Wilcoxon Matched-Pair Signed-Rank tests. Matched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment.
The tested hypotheses are: The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks. This null hypothesis is rejected for all contract net organizations with fewer than 14 agents (static: p < 0.0003; dynamic: p < 0.03). For large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents; however, the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents. Thus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld. The OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality. We can reject the null hypothesis for contract net organizations with fewer than 12 agents (static: p < 0.01; dynamic: p < 0.05). For larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD. The OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p < 0.0002; dynamic: p < 0.0004). Figure 1: Graph comparing the average performance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents). The error bars show the standard deviations. Hence, as demonstrated by the above tests, our agents perform better than the contract net agents as they complete a larger number of tasks, achieve a greater quality and also have a lower response time and communication overhead. These results make intuitive sense given our goals for the OSD approach. We expected the OSD organizations to have a faster average response time and to send fewer messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids. The quality gained on the tasks is directly dependent on the number of tasks completed; hence the more tasks completed, the greater the average quality. The results of testing the first hypothesis were slightly more surprising. It appears that, due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents is needed to complete an equal number of tasks.
I-70
A Multi-Agent System for Building Dynamic Ontologies
Ontology building from text is still a time-consuming task, which justifies the growth of Ontology Learning. Our system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture. In this paper we present a distributed hierarchical clustering algorithm, the core of our approach. It is evaluated and compared to a more conventional centralized algorithm. We also present how it has been improved using a multi-criteria approach. With those results in mind, we discuss the limits of our system and present as perspectives the modifications required to reach a complete ontology building solution.
[ "ontolog", "dynamo", "cooper", "emerg behavior", "multi-agent field", "quantit evalu", "black-box", "parent adequaci function", "hepat", "terminolog rich", "model qualiti", "dynam equilibrium" ]
[ "P", "P", "U", "U", "U", "M", "U", "U", "U", "U", "U", "M" ]
A Multi-Agent System for Building Dynamic Ontologies Kévin Ottens ∗ IRIT, Université Paul Sabatier 118 Route de Narbonne F-31062 TOULOUSE ottens@irit.fr Marie-Pierre Gleizes IRIT, Université Paul Sabatier 118 Route de Narbonne F-31062 TOULOUSE gleizes@irit.fr Pierre Glize IRIT, Université Paul Sabatier 118 Route de Narbonne F-31062 TOULOUSE glize@irit.fr ABSTRACT Ontology building from text is still a time-consuming task, which justifies the growth of Ontology Learning. Our system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture. In this paper we present a distributed hierarchical clustering algorithm, the core of our approach. It is evaluated and compared to a more conventional centralized algorithm. We also present how it has been improved using a multi-criteria approach. With those results in mind, we discuss the limits of our system and present as perspectives the modifications required to reach a complete ontology building solution. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence--Multiagent Systems General Terms Algorithms, Experimentation 1. INTRODUCTION Nowadays, it is well established that ontologies are needed for semantic web, knowledge management, B2B... For knowledge management, ontologies are used to annotate documents and to enhance the information retrieval. But building an ontology manually is a slow, tedious, costly, complex and time-consuming process. Currently, a real challenge lies in building them automatically or semi-automatically and keeping them up to date. It would mean creating dynamic ontologies [10] and it justifies the emergence of ontology learning techniques [14] [13]. Our research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain-specific set of texts. Our aim is not to build an exhaustive, general hierarchical ontology but a domain-specific one. We propose a semi-automated tool since an external resource is required: the ``ontologist''. An ontologist is a kind of cognitive engineer, or analyst, who is using information from texts and expert interviews to design ontologies. In the multi-agent field, ontologies generally enable agents to understand each other [12]. They're sometimes used to ease the ontology building process, in particular for collaborative contexts [3], but they rarely represent the ontology itself [16]. Most works interested in the construction of ontologies [7] propose the refinement of ontologies. This process consists in using an existing ontology and building a new one from it. This approach is different from our approach because Dynamo starts from scratch. Researchers working on the construction of ontologies from texts claim that the work to be automated requires external resources such as a dictionary [14], or web access [5]. In our work, we propose an interaction between the ontologist and the system; our external resource lies both in the texts and the ontologist. This paper first presents, in section 2, the big picture of the Dynamo system, in particular the motives that led to its creation and its general architecture. Then, in section 3 we discuss the distributed clustering algorithm used in Dynamo and compare it to a more classic centralized approach. Section 4 is dedicated to some enhancements of the agents' behavior, designed by taking into account criteria ignored by clustering.
And finally, in section 5, we discuss the limitations of our approach and explain how they will be addressed in further work. 2. DYNAMO OVERVIEW 2.1 Ontology as a Multi-Agent System Dynamo aims at reducing the need for manual actions in processing the text analysis results and at suggesting a concept network kick-off in order to build ontologies more efficiently. The chosen approach is completely original to our knowledge and uses an adaptive multi-agent system. This choice comes from the qualities offered by multi-agent systems: they can ease the interactive design of a system [8] (in our case, a conceptual network), they allow its incremental building by progressively taking into account new data (coming from text analysis and user interaction), and last but not least they can be easily distributed across a computer network. Dynamo takes a syntactical and terminological analysis of texts as input. It uses several criteria based on statistics computed from the linguistic contexts of terms to create and position the concepts. As output, Dynamo provides to the analyst a hierarchical organization of concepts (the multi-agent system itself) that can be validated, refined or modified, until he/she obtains a satisfying state of the semantic network. An ontology can be seen as a stable map constituted of conceptual entities, represented here by agents, linked by labelled relations. Thus, our approach considers an ontology as a type of equilibrium between its concept-agents where their forces are defined by their potential relationships. The ontology modification is a perturbation of the previous equilibrium by the appearance or disappearance of agents or relationships. In this way, a dynamic ontology is a self-organizing process occurring when new texts are included into the corpus, or when the ontologist interacts with it. To support the needed flexibility of such a system we use a self-organizing multi-agent system based on a cooperative approach [9]. We followed the ADELFE method [4] proposed to drive the design of this kind of multi-agent system. It justifies how we designed some of the rules used by our agents in order to maximize the cooperation degree within Dynamo's multi-agent system. 2.2 Proposed Architecture In this section, we present our system architecture. It addresses the needs of Knowledge Engineering in the context of dynamic ontology management and maintenance when the ontology is linked to a document collection. The Dynamo system consists of three parts (cf. figure 1): • a term network, obtained thanks to a term extraction tool used to preprocess the textual corpus, • a multi-agent system which uses the term network to make a hierarchical clustering in order to obtain a taxonomy of concepts, • an interface allowing the ontologist to visualize and control the clustering process. Figure 1: System architecture (ontologist interface, concept-agent system, term network and term extraction tool). The term extractor we use is Syntex, a software that has efficiently been used for ontology building tasks [11]. We mainly selected it because of its robustness and the great amount of information extracted. In particular, it creates a ``Head-Expansion'' network which has already proven to be interesting for a clustering system [1]. In such a network, each term is linked to its head term (i.e. the maximum sub-phrase located as head of the term) and its expansion term (i.e. the maximum sub-phrase located as tail of the term), and also to all the terms for which it is a head or an expansion term.
For example, ``knowledge engineering from text'' has ``knowledge engineering'' as head term and ``text'' as expansion term. Moreover, ``knowledge engineering'' is composed of ``knowledge'' as head term and ``engineering'' as expansion term. With Dynamo, the term network obtained as the output of the extractor is stored in a database. For each term pair, we assume that it is possible to compute a similarity value in order to make a clustering [6] [1]. Because of the nature of the data, we are only focusing on similarity computation between objects described thanks to binary variables, which means that each item is described by the presence or absence of a characteristic set [15]. In the case of terms we are generally dealing with their usage contexts. With Syntex, those contexts are identified by terms and characterized by some syntactic relations. The Dynamo multi-agent system implements the distributed clustering algorithm described in detail in section 3 and the rules described in section 4. It is designed to be both the system producing the resulting structure and the structure itself. It means that each agent represents a class in the taxonomy. Then, the system output is the organization obtained from the interaction between agents, while taking into account feedback coming from the ontologist when he/she modifies the taxonomy given his/her needs or expertise. 3. DISTRIBUTED CLUSTERING This section presents the distributed clustering algorithm used in Dynamo. For the sake of understanding, and because of its evaluation in section 3.1, we recall the basic centralized algorithm used for a hierarchical ascending clustering in a non-metric space, when a symmetrical similarity measure is available [15] (which is the case of the measures used in our system).

Algorithm 1: Centralized hierarchical ascending clustering algorithm
Data: List L of items to organize as a hierarchy
Result: Root R of the hierarchy
while length(L) > 1 do
  max ← 0; A ← nil; B ← nil;
  for i ← 1 to length(L) do
    I ← L[i];
    for j ← i + 1 to length(L) do
      J ← L[j];
      sim ← similarity(I, J);
      if sim > max then
        max ← sim; A ← I; B ← J;
      end
    end
  end
  remove(A, L); remove(B, L); append((A, B), L);
end
R ← L[1];

In algorithm 1, for each clustering step, the pair of the most similar elements is determined. Those two elements are grouped in a cluster, and the resulting class is appended to the list of remaining elements. This algorithm stops when the list has only one element left (a runnable sketch of this centralized procedure is given below, after the description of the distributed bootstrap). The hierarchy resulting from algorithm 1 is always a binary tree because of the way grouping is done. Moreover, grouping the most similar elements is equivalent to moving them away from the least similar ones. Our distributed algorithm is designed relying on those two facts. It is executed concurrently in each of the agents of the system. Note that, in the remainder of this paper, we used for both algorithms an Anderberg similarity (with α = 0.75) and an average link clustering strategy [15]. Those choices have an impact on the resulting tree, but they impact neither the global execution of the algorithm nor its complexity. We now present the distributed algorithm used in our system. It is bootstrapped in the following way: • a TOP agent having no parent is created; it will be the root of the resulting taxonomy, • an agent is created for each term to be positioned in the taxonomy; they all have TOP as parent.
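As promised above, here is the centralized procedure of algorithm 1 rendered in a few lines of Python. This is a minimal sketch under our own assumptions: similarity is any caller-supplied symmetric measure (the paper uses an average-link Anderberg similarity), and nested tuples stand in for the clusters that the remove/append operations build.

def cluster(items, similarity):
    # Merge the most similar pair until a single root remains (binary tree).
    items = list(items)           # assumes at least one item
    while len(items) > 1:
        best, pair = float('-inf'), None
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                sim = similarity(items[i], items[j])
                if sim > best:
                    best, pair = sim, (items[i], items[j])
        a, b = pair
        items.remove(a)
        items.remove(b)
        items.append((a, b))      # the new cluster rejoins the list
    return items[0]

# Placeholder similarity (meaningless, for wiring only); real runs would use
# the Anderberg measure over term usage contexts.
root = cluster(["infection", "hepatitis", "lesion"],
               lambda x, y: -abs(len(str(x)) - len(str(y))))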
Once this basic structure is set, the algorithm runs until it reaches equilibrium and then provides the resulting taxonomy. Figure 2: Distributed classification: Step 1 (children A1 to An under parent P). The process first step (figure 2) is triggered when an agent (here Ak) has more than one brother (since we want to obtain a binary tree). Then it sends a message to its parent P indicating its most dissimilar brother (here A1). Then P receives the same kind of message from each of its children. In the following, this kind of message will be called a ``vote''. Figure 3: Distributed clustering: Step 2. Next, when P has received messages from all its children, it starts the second step (figure 3). Thanks to the received messages indicating the preferences of its children, P can determine three sub-groups among its children: • the child which got the most ``votes'' by its brothers, that is the child being the most dissimilar from the greatest number of its brothers; in case of a draw, one of the winners is chosen randomly (here A1), • the children that allowed the ``election'' of the first group, that is the agents which chose their brother of the first group as being the most dissimilar one (here Ak to An), • the remaining children (here A2 to Ak−1). Then P creates a new agent P' (having P as parent) and asks agents from the second group (here agents Ak to An) to make it their new parent. Figure 4: Distributed clustering: Step 3. Finally, step 3 (figure 4) is trivial. The children rejected by P (here agents Ak to An) take its message into account and choose P' as their new parent. The hierarchy just created a new intermediate level. Note that this algorithm generally converges, since the number of brothers of an agent drops. When an agent has only one remaining brother, its activity stops (although it keeps processing messages coming from its children). However in a few cases we can reach a ``circular conflict'' in the voting procedure when for example A votes against B, B against C and C against A. With the current system no decision can be taken. The current procedure should be improved to address this, probably using a ranked voting method. 3.1 Quantitative Evaluation Now, we evaluate the properties of our distributed algorithm. We begin with a quantitative evaluation, based on its complexity, comparing it with algorithm 1 from the previous section. Its theoretical complexity is calculated for the worst case, by considering the similarity computation operation as elementary. For the distributed algorithm, the worst case means that for each run, only a two-item group can be created. Under those conditions, for a given dataset of n items, we can determine the amount of similarity computations. For algorithm 1, we note l = length(L); then the innermost ``for'' loop is run l − i times, and its body contains the only similarity computation, so its cost is l − i. The second ``for'' loop is run l times for i ranging from 1 to l. Then its cost is (l − 1) + (l − 2) + · · · + 0, which simplifies to l × (l − 1)/2. Finally, for each run of the ``while'' loop, l is decreased from n to 1, which gives us t1(n) as the amount of similarity computations for algorithm 1: t1(n) = Σl=1..n [l × (l − 1)/2] (1). For the distributed algorithm, at a given step, each one of the l agents evaluates the similarity with its l − 1 brothers. So each step has an l × (l − 1) cost.
Then, groups are created and another vote occurs with l decreased by one (since we assume the worst case, only groups of size 2 or l − 1 are built). Since l is equal to n on the first run, we obtain tdist(n) as the amount of similarity computations for the distributed algorithm: tdist(n) = Σl=1..n [l × (l − 1)] (2). Both algorithms then have an O(n³) complexity. But in the worst case, the distributed algorithm does twice the number of elementary operations done by the centralized algorithm. This gap comes from the local decision making in each agent. Because of this, the similarity computations are done twice for each agent pair. We could conceive that an agent sends its computation result to its peer. But it would simply move the problem by generating more communication in the system. Figure 5: Experimental results (amount of comparisons as a function of the amount of input terms; curve 1: distributed algorithm, on average with min and max; curve 2: logarithmic polynomial; curve 3: centralized algorithm). In a second step, the average complexity of the algorithm has been determined by experiments. The multi-agent system has been executed with randomly generated input data sets ranging from ten to one hundred terms. The given value is the average of comparisons made for one hundred runs without any user interaction. It results in the plots of figure 5. The algorithm is then more efficient on average than the centralized algorithm, and its average complexity is below the worst case. It can be explained by the low probability that a data set forces the system to create only minimal groups (two items) or maximal ones (n − 1 elements) for each step of reasoning. Curve number 2 represents the logarithmic polynomial minimizing the error with curve number 1. The highest degree term of this polynomial is in n² log(n); then our distributed algorithm has an O(n² log(n)) complexity on average. Finally, let's note the reduced variation of the average performances with the maximum and the minimum. In the worst case for 100 terms, the variation is 1,960.75 for an average of 40,550.10 (around 5%), which shows the good stability of the system. 3.2 Qualitative Evaluation Although the quantitative results are interesting, the real advantage of this approach comes from more qualitative characteristics that we will present in this section. All are advantages obtained thanks to the use of an adaptive multi-agent system. The main advantage of the use of a multi-agent system for a clustering task is to introduce dynamics into such a system. The ontologist can make modifications and the hierarchy adapts depending on the request. It is particularly interesting in a knowledge engineering context. Indeed, the hierarchy created by the system is meant to be modified by the ontologist since it is the result of a statistical computation. During the necessary look at the texts to examine the usage contexts of terms [2], the ontologist will be able to interpret the real content and to revise the system proposal. It is extremely difficult to realize this with a centralized ``black-box'' approach. In most cases, one has to find which reasoning step generated the error and to manually modify the resulting class. Unfortunately, in this case, all the reasoning steps that occurred after the creation of the modified class are lost and must be recalculated by taking the modification into account.
That is why a system like ASIUM [6] tries to soften the problem with a system-user collaboration by showing the ontologist the created classes after each step of reasoning. But the ontologist can make a mistake, and become aware of it too late. Figure 6: Concept agent tree after autonomous stabilization of the system. In order to illustrate our claims, we present an example thanks to a few screenshots from the working prototype tested on a medical related corpus. By using test data and letting the system work by itself, we obtain the hierarchy from figure 6 after stabilization. It is clear that the concept described by the term ``lésion'' (lesion) is misplaced. It happens that the similarity computations place it closer to ``femme'' (woman) and ``chirurgien'' (surgeon) than to ``infection'', ``gastro-entérite'' (gastro-enteritis) and ``hépatite'' (hepatitis). This wrong position for ``lesion'' is explained by the fact that without ontologist input the reasoning is only done on statistical criteria. Figure 7: Concept agent tree after ontologist modification. Then, the ontologist replaces the concept in the right branch, by assigning ``ConceptAgent:8'' as its new parent. The name ``ConceptAgent:X'' is automatically given to a concept agent that is not described by a term. The system reacts by itself and refines the clustering hierarchy to obtain a binary tree by creating ``ConceptAgent:11''. The new stable state is the one of figure 7. This system-user coupling is necessary to build an ontology, but no particular adjustment to the distributed algorithm principle is needed since each agent does an autonomous local processing and communicates with its neighborhood by messages. Moreover, this algorithm can de facto be distributed on a computer network. The communication between agents is then done by sending messages and each one keeps its decision autonomy. Then, a system modification to make it run networked would not require adjusting the algorithm. On the contrary, it would only require reworking the communication layer and the agent creation process since in our current implementation those are not networked. 4. MULTI-CRITERIA HIERARCHY In the previous sections, we assumed that similarity can be computed for any term pair. But as soon as one uses real data this property is not verified anymore. Some terms do not have any similarity value with any extracted term. Moreover, for leaf nodes it is sometimes interesting to use other means to position them in the hierarchy. For this low level structuring, ontologists generally base their choices on simple heuristics. Using this observation, we built a new set of rules, which are not based on similarity, to support low level structuring. 4.1 Adding Head Coverage Rules In this case, agents can act with a very local point of view simply by looking at the parent/child relation. Each agent can try to determine if its parent is adequate. It is possible to guess this because each concept agent is described by a set of terms and thanks to the ``Head-Expansion'' term network. In the following, TX will be the set of terms describing concept agent X and head(TX) the set of all the terms that are head of at least one element of TX. Thanks to those two notations we can describe the parent adequacy function a(P, C) between a parent P and a child C: a(P, C) = |TP ∩ head(TC)| / |TP ∪ head(TC)| (3). Then, the best parent for C is the P agent that maximizes a(P, C).
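Equation (3) is a Jaccard-style overlap between the parent's terms and the heads of the child's terms, so it is straightforward to compute. A minimal sketch; the term sets and the head map (derived from the Head-Expansion network) are assumed inputs:

def heads(term_set, head_of):
    # head(T_X): the terms that are head of at least one element of T_X.
    return {head_of[t] for t in term_set if t in head_of}

def adequacy(parent_terms, child_terms, head_of):
    # Equation (3): a(P, C) = |T_P ∩ head(T_C)| / |T_P ∪ head(T_C)|.
    hc = heads(child_terms, head_of)
    union = parent_terms | hc
    return len(parent_terms & hc) / len(union) if union else 0.0

# Toy data mirroring the paper's own example: the head of "viral hepatitis"
# is "hepatitis", so a concept described by {"hepatitis"} is a perfect parent.
head_of = {"viral hepatitis": "hepatitis"}
print(adequacy({"hepatitis"}, {"viral hepatitis"}, head_of))  # -> 1.0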
An agent unsatisfied by its parent can then try to find a better one by evaluating adequacy with candidates. We designed a complementary algorithm to drive this search: when an agent C is unsatisfied by its parent P, it evaluates a(Bi, C) with all its brothers (noted Bi); the one maximizing a(Bi, C) is then chosen as the new parent. Figure 8: Concept agent tree after autonomous stabilization of the system without head coverage rule. We now illustrate this rule behavior with an example. Figure 8 shows the state of the system after stabilization on test data. We can notice that ``hépatite viral'' (viral hepatitis) is still linked to the taxonomy root. It is caused by the fact that there is no similarity value between the ``viral hepatitis'' term and any of the terms of the other concept agents. Figure 9: Concept agent tree after activation of the head coverage rule. After activating the head coverage rule and letting the system stabilize again we obtain figure 9. We can see that ``viral hepatitis'' slipped through the branch leading to ``hepatitis'' and chose it as its new parent. It is a sensible default choice since ``viral hepatitis'' is a more specific term than ``hepatitis''. This rule tends to push agents described by a set of terms to become leaves of the concept tree. It addresses our concern to improve the low level structuring of our taxonomy. But obviously our agents lack a way to backtrack in case of modifications in the taxonomy which would make them be located in the wrong branch. That is one of the points where our system still has to be improved by adding another set of rules. 4.2 On Using Several Criteria In the previous sections and examples, we only used one algorithm at a time. The distributed clustering algorithm tends to introduce new layers in the taxonomy, while the head coverage algorithm tends to push some of the agents toward the leaves of the taxonomy. It obviously raises the question of how to deal with multiple criteria in our taxonomy building, and how agents determine their priorities at a given time. The solution we chose came from the search for minimizing non cooperation within the system in accordance with the ADELFE method. Each agent computes three non cooperation degrees and chooses its current priority depending on which degree is the highest. For a given agent A having a parent P, a set of brothers Bi, and a set of received messages Mk with priorities pk, the three non cooperation degrees are: • μH(A) = 1 − a(P, A), the ``head coverage'' non cooperation degree, determined by the head coverage of the parent, • μB(A) = max(1 − similarity(A, Bi)), the ``brotherhood'' non cooperation degree, determined by the worst brother of A regarding similarities, • μM(A) = max(pk), the ``message'' non cooperation degree, determined by the most urgent message received.
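Taken together, the three degrees reduce an agent's action selection to an argmax, anticipating the max rule of equation (4) stated next. A minimal sketch; the adequacy and similarity callables and the numeric priority attribute on messages are assumed inputs:

def choose_action(agent, parent, brothers, messages, adequacy, similarity):
    # The three non cooperation degrees of the text; `adequacy` is a(P, A)
    # from equation (3) and `similarity` the clustering measure.
    mu_h = 1.0 - adequacy(parent, agent)                        # head coverage
    mu_b = max((1.0 - similarity(agent, b) for b in brothers),
               default=0.0)                                     # brotherhood
    mu_m = max((m.priority for m in messages), default=0.0)     # messages

    # Equation (4): mu(A) = max(mu_H, mu_B, mu_M); the matching case fires.
    _, action = max((mu_h, "head_coverage_search"),
                    (mu_b, "distributed_clustering"),
                    (mu_m, "process_most_urgent_message"))
    return action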
Then, the non cooperation degree μ(A) of agent A is: μ(A) = max(μH(A), μB(A), μM(A)) (4). Then, we have three cases determining which kind of action A will choose: • if μ(A) = μH(A) then A will use the head coverage algorithm we detailed in the previous subsection, • if μ(A) = μB(A) then A will use the distributed clustering algorithm (see section 3), • if μ(A) = μM(A) then A will process Mk immediately in order to help its sender. Those three cases summarize the current activities of our agents: they have to find the best parent for them (μ(A) = μH(A)), improve the structuring through clustering (μ(A) = μB(A)) and process other agents' messages (μ(A) = μM(A)) in order to help them fulfill their own goals. 4.3 Experimental Complexity Revisited We evaluated the experimental complexity of the whole multi-agent system when all the rules are activated. In this case, the metric used is the number of messages exchanged in the system. Once again the system has been executed with input data sets ranging from ten to one hundred terms. The given value is the average number of messages sent in the system as a whole over one hundred runs without user interaction. It results in the plots of figure 10. Curve number 1 represents the average of the values obtained. Curve number 2 represents the average of the values obtained when only the distributed clustering algorithm is activated, not the full rule set. Curve number 3 represents the polynomial minimizing the error with curve number 1. The highest degree term of this polynomial is in n³; then our multi-agent system has an O(n³) complexity on average. Figure 10: Experimental results (amount of messages as a function of the amount of input terms; curve 1: Dynamo with all rules, on average with min and max; curve 2: distributed clustering only, on average; curve 3: cubic polynomial). Moreover, let's note the very small variation of the average performances with the maximum and the minimum. In the worst case for 100 terms, the variation is 126.73 for an average of 20,737.03 (around 0.6%), which proves the excellent stability of the system. Finally, the extra head coverage rules are a real improvement on the distributed algorithm alone. They introduce more constraints and the stability point is reached with fewer interactions and less decision making by the agents. It means that fewer messages are exchanged in the system while obtaining a tree of higher quality for the ontologist. 5. DISCUSSION & PERSPECTIVES 5.1 Current Limitation of our Approach The most important limitation of our current algorithm is that the result depends on the order in which the data gets added. When the system works by itself on a fixed data set given during initialization, the final result is equivalent to what we could obtain with a centralized algorithm. On the contrary, adding a new item after a first stabilization has an impact on the final result. Figure 11: Concept agent tree after autonomous stabilization of the system. To illustrate our claims, we present another example of the working system. By using test data and letting the system work by itself, we obtain the hierarchy of figure 11 after stabilization. Figure 12: Concept agent tree after taking into account ``hepatitis''. Then, the ontologist interacts with the system and adds a new concept described by the term ``hepatitis'' and linked to the root. The system reacts and stabilizes; we then obtain figure 12 as a result.
``hepatitis'' is located in the right branch, but we have not obtained the same organization as in figure 6 of the previous example. We need to improve our distributed algorithm to allow a concept to move along a branch. We are currently working on the required rules, but the comparison with the centralized algorithm will become very difficult, in particular since the new rules will take into account criteria ignored by the centralized algorithm. 5.2 Pruning for Ontologies Building In section 3, we presented the distributed clustering algorithm used in the Dynamo system. Since this work was first based on this algorithm, it introduced a clear bias toward binary trees as a result. But we have to keep in mind that we are trying to obtain taxonomies which are more refined and concise. Although the head coverage rule is an improvement because it is based on how the ontologists generally work, it only addresses low level structuring but not the intermediate levels of the tree. By looking at figure 7, it is clear that some pruning could be done in the taxonomy. In particular, since ``lésion'' moved, ``ConceptAgent:9'' could be removed; it is not needed anymore. Moreover, the branch starting with ``ConceptAgent:8'' clearly respects the constraint to make a binary tree, but it would be more useful to the user in a more compact and meaningful form. In this case ``ConceptAgent:10'' and ``ConceptAgent:11'' could probably be merged. Currently, our system has the necessary rules to create intermediate levels in the taxonomy, or to have concepts shifting towards the leaves. As we pointed out, it is not enough, so new rules are needed to allow removing nodes from the tree, or moving them toward the root. Most of the work needed to develop those rules consists in finding the relevant statistical information that will support the ontologist. 6. CONCLUSION Although it has been presented as a promising solution for ensuring model quality and terminological richness, ontology building from textual corpus analysis is difficult and costly. It requires analyst supervision and taking into account the aim of the ontology. Using natural language processing tools eases the localization of knowledge in texts through language use. That said, those tools produce a huge amount of lexical or grammatical data which is not trivial to examine in order to define conceptual elements. Our contribution lies in this step of the modeling process from texts, before any attempt to normalize or formalize the result. We proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts. Our system makes use of a terminological network resulting from an analysis made by Syntex. The current state of our software allows it to produce simple structures, to propose them to the ontologist and to make them evolve depending on the modifications he/she makes. The performance of the system is interesting and some aspects are even comparable to their centralized counterpart. Its strengths are mostly qualitative since it allows more subtle user interactions and a progressive adaptation to new linguistically based information. From the point of view of ontology building, this work is a first step showing the relevance of our approach. It must continue, both to ensure a better robustness during classification, and to obtain semantically richer structures than simple trees. Among these improvements we are mostly focusing on the pruning to obtain better taxonomies.
We're currently working on the criterion to trigger the complementary actions of the structure changes applied by our clustering algorithm. In other words, this algorithm introduces intermediate levels, and we need to be able to remove them if necessary, in order to reach a dynamic equilibrium. Also, from the multi-agent engineering point of view, the use of multi-agent systems in a dynamic ontology context has shown its relevance. Dynamic ontologies can be seen as complex problem solving; in such a case self-organization through cooperation has been an efficient solution. And, more generally, it's likely to be interesting for other design-related tasks, even if we're focusing only on knowledge engineering in this paper. Of course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach. We're planning to work on such benchmarking in the near future. 7. REFERENCES [1] H. Assadi. Construction of a regional ontology from text and its use within a documentary system. Proceedings of the International Conference on Formal Ontology and Information Systems - FOIS'98, pages 236-249, 1998. [2] N. Aussenac-Gilles and D. Sörgel. Text analysis for ontology and terminology engineering. Journal of Applied Ontology, 2005. [3] J. Bao and V. Honavar. Collaborative ontology building with wiki@nt. Proceedings of the Workshop on Evaluation of Ontology-Based Tools (EON2004), 2004. [4] C. Bernon, V. Camps, M.-P. Gleizes, and G. Picard. Agent-Oriented Methodologies, chapter 7. Engineering Self-Adaptive Multi-Agent Systems: the ADELFE Methodology, pages 172-202. Idea Group Publishing, 2005. [5] C. Brewster, F. Ciravegna, and Y. Wilks. Background and foreground knowledge in dynamic ontology construction. Semantic Web Workshop, SIGIR'03, August 2003. [6] D. Faure and C. Nedellec. A corpus-based conceptual clustering method for verb frames and ontology acquisition. LREC workshop on adapting lexical and corpus resources to sublanguages and applications, 1998. [7] F. Gandon. Ontology Engineering: a Survey and a Return on Experience. INRIA, 2002. [8] J.-P. Georgé, G. Picard, M.-P. Gleizes, and P. Glize. Living Design for Open Computational Systems. 12th IEEE International Workshops on Enabling Technologies, Infrastructure for Collaborative Enterprises, pages 389-394, June 2003. [9] M.-P. Gleizes, V. Camps, and P. Glize. A Theory of emergent computation based on cooperative self-organization for adaptive artificial systems. Fourth European Congress of Systems Science, September 1999. [10] J. Heflin and J. Hendler. Dynamic ontologies on the web. American Association for Artificial Intelligence Conference, 2000. [11] S. Le Moigno, J. Charlet, D. Bourigault, and M.-C. Jaulent. Terminology extraction from text to build an ontology in surgical intensive care. Proceedings of the AMIA 2002 annual symposium, 2002. [12] K. Lister, L. Sterling, and K. Taveter. Reconciling Ontological Differences by Assistant Agents. AAMAS'06, May 2006. [13] A. Maedche. Ontology learning for the Semantic Web. Kluwer Academic Publisher, 2002. [14] A. Maedche and S. Staab. Mining Ontologies from Text. EKAW 2000, pages 189-202, 2000. [15] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts, 1999. [16] H. V. D. Parunak, R. Rohwer, T. C. Belding, and S. Brueckner. Dynamic decentralized any-time hierarchical clustering.
29th Annual International ACM SIGIR Conference on Research & Development in Information Retrieval, August 2006.
A Multi-Agent System for Building Dynamic Ontologies ABSTRACT Ontology building from text is still a time-consuming task, which justifies the growth of Ontology Learning. Our system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture. In this paper we present a distributed hierarchical clustering algorithm, the core of our approach. It is evaluated and compared to a more conventional centralized algorithm. We also present how it has been improved using a multi-criteria approach. With those results in mind, we discuss the limits of our system and present as perspectives the modifications required to reach a complete ontology building solution. 1. INTRODUCTION Nowadays, it is well established that ontologies are needed for semantic web, knowledge management, B2B... For knowledge management, ontologies are used to annotate documents and to enhance the information retrieval. But building an ontology manually is a slow, tedious, costly, complex and time-consuming process. Currently, a real challenge lies in building them automatically or semi-automatically and keeping them up to date. It would mean creating dynamic ontologies [10] and it justifies the emergence of ontology learning techniques [14] [13]. Our research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain-specific set of texts. Our aim is not to build an exhaustive, general hierarchical ontology but a domain-specific one. We propose a semi-automated tool since an external resource is required: the "ontologist". An ontologist is a kind of cognitive engineer, or analyst, who is using information from texts and expert interviews to design ontologies. In the multi-agent field, ontologies generally enable agents to understand each other [12]. They're sometimes used to ease the ontology building process, in particular for collaborative contexts [3], but they rarely represent the ontology itself [16]. Most works interested in the construction of ontologies [7] propose the refinement of ontologies. This process consists in using an existing ontology and building a new one from it. This approach is different from our approach because Dynamo starts from scratch. Researchers working on the construction of ontologies from texts claim that the work to be automated requires external resources such as a dictionary [14], or web access [5]. In our work, we propose an interaction between the ontologist and the system; our external resource lies both in the texts and the ontologist. This paper first presents, in section 2, the big picture of the Dynamo system, in particular the motives that led to its creation and its general architecture. Then, in section 3 we discuss the distributed clustering algorithm used in Dynamo and compare it to a more classic centralized approach. Section 4 is dedicated to some enhancements of the agents' behavior, designed by taking into account criteria ignored by clustering. And finally, in section 5, we discuss the limitations of our approach and explain how they will be addressed in further work. 2. DYNAMO OVERVIEW 2.1 Ontology as a Multi-Agent System Dynamo aims at reducing the need for manual actions in processing the text analysis results and at suggesting a concept network kick-off in order to build ontologies more efficiently. The chosen approach is completely original to our knowledge and uses an adaptive multi-agent system.
This choice comes from the qualities offered by multi-agent systems: they can ease the interactive design of a system [8] (in our case, a conceptual network), they allow its incremental building by progressively taking into account new data (coming from text analysis and user interaction), and last but not least they can be easily distributed across a computer network. Dynamo takes a syntactical and terminological analysis of texts as input. It uses several criteria based on statistics computed from the linguistic contexts of terms to create and position the concepts. As output, Dynamo provides to the analyst a hierarchical organization of concepts (the multi-agent system itself) that can be validated, refined or modified, until he/she obtains a satisfying state of the semantic network. An ontology can be seen as a stable map constituted of conceptual entities, represented here by agents, linked by labelled relations. Thus, our approach considers an ontology as a type of equilibrium between its concept-agents where their forces are defined by their potential relationships. The ontology modification is a perturbation of the previous equilibrium by the appearance or disappearance of agents or relationships. In this way, a dynamic ontology is a self-organizing process occurring when new texts are included into the corpus, or when the ontologist interacts with it. To support the needed flexibility of such a system we use a self-organizing multi-agent system based on a cooperative approach [9]. We followed the ADELFE method [4] proposed to drive the design of this kind of multi-agent system. It justifies how we designed some of the rules used by our agents in order to maximize the cooperation degree within Dynamo's multi-agent system. 2.2 Proposed Architecture In this section, we present our system architecture. It addresses the needs of Knowledge Engineering in the context of dynamic ontology management and maintenance when the ontology is linked to a document collection. The Dynamo system consists of three parts (cf. figure 1): • a term network, obtained thanks to a term extraction tool used to preprocess the textual corpus, • a multi-agent system which uses the term network to make a hierarchical clustering in order to obtain a taxonomy of concepts, • an interface allowing the ontologist to visualize and control the clustering process. The term extractor we use is Syntex, a software that has efficiently been used for ontology building tasks [11]. We mainly selected it because of its robustness and the great amount of information extracted. In particular, it creates a "Head-Expansion" network which has already proven to be interesting for a clustering system [1]. In such a network, each term is linked to its head term (i.e. the maximum sub-phrase located as head of the term) and its expansion term (i.e. the maximum sub-phrase located as tail of the term), and also to all the terms for which it is a head or an expansion term. For example, "knowledge engineering from text" has "knowledge engineering" as head term and "text" as expansion term. Moreover, "knowledge engineering" is composed of "knowledge" as head term and "engineering" as expansion term. With Dynamo, the term network obtained as the output of the extractor is stored in a database. For each term pair, we assume that it is possible to compute a similarity value in order to make a clustering [6] [1].
Because of the nature of the data, we are only focusing on similarity computation between objects described thanks to binary variables, which means that each item is described by the presence or absence of a characteristic set [15]. In the case of terms we are generally dealing with their usage contexts. With Syntex, those contexts are identified by terms and characterized by some syntactic relations. The Dynamo multi-agent system implements the distributed clustering algorithm described in detail in section 3 and the rules described in section 4. It is designed to be both the system producing the resulting structure and the structure itself. It means that each agent represents a class in the taxonomy. Then, the system output is the organization obtained from the interaction between agents, while taking into account feedback coming from the ontologist when he/she modifies the taxonomy given his/her needs or expertise. 3. DISTRIBUTED CLUSTERING This section presents the distributed clustering algorithm used in Dynamo. For the sake of understanding, and because of its evaluation in section 3.1, we recall the basic centralized algorithm used for a hierarchical ascending clustering in a non-metric space, when a symmetrical similarity measure is available [15] (which is the case of the measures used in our system). Algorithm 1: Centralized hierarchical ascending clustering algorithm. Data: List L of items to organize as a hierarchy. Result: Root R of the hierarchy. In algorithm 1, for each clustering step, the pair of the most similar elements is determined. Those two elements are grouped in a cluster, and the resulting class is appended to the list of remaining elements. This algorithm stops when the list has only one element left. Figure 1: System architecture. The hierarchy resulting from algorithm 1 is always a binary tree because of the way grouping is done. Moreover, grouping the most similar elements is equivalent to moving them away from the least similar ones. Our distributed algorithm is designed relying on those two facts. It is executed concurrently in each of the agents of the system. Note that, in the remainder of this paper, we used for both algorithms an Anderberg similarity (with α = 0.75) and an average link clustering strategy [15]. Those choices have an impact on the resulting tree, but they impact neither the global execution of the algorithm nor its complexity. We now present the distributed algorithm used in our system. It is bootstrapped in the following way: • a TOP agent having no parent is created; it will be the root of the resulting taxonomy, • an agent is created for each term to be positioned in the taxonomy; they all have TOP as parent. Once this basic structure is set, the algorithm runs until it reaches equilibrium and then provides the resulting taxonomy. Figure 2: Distributed classification: Step 1. The process first step (figure 2) is triggered when an agent (here Ak) has more than one brother (since we want to obtain a binary tree). Then it sends a message to its parent P indicating its most dissimilar brother (here A1). Then P receives the same kind of message from each of its children. In the following, this kind of message will be called a "vote". Figure 3: Distributed clustering: Step 2. Next, when P has received messages from all its children, it starts the second step (figure 3).
Thanks to the received messages indicating the preferences of its children, P can determine three sub-groups among its children: • the child which got the most "votes" by its brothers, that is the child being the most dissimilar from the greatest number of its brothers; in case of a draw, one of the winners is chosen randomly (here A1), • the children that allowed the "election" of the first group, that is the agents which chose their brother of the first group as being the most dissimilar one (here Ak to An), • the remaining children (here A2 to Ak−1). Then P creates a new agent P' (having P as parent) and asks agents from the second group (here agents Ak to An) to make it their new parent. Figure 4: Distributed clustering: Step 3. Finally, step 3 (figure 4) is trivial. The children rejected by P (here agents Ak to An) take its message into account and choose P' as their new parent. The hierarchy just created a new intermediate level. Note that this algorithm generally converges, since the number of brothers of an agent drops. When an agent has only one remaining brother, its activity stops (although it keeps processing messages coming from its children). However in a few cases we can reach a "circular conflict" in the voting procedure when for example A votes against B, B against C and C against A. With the current system no decision can be taken. The current procedure should be improved to address this, probably using a ranked voting method. 3.1 Quantitative Evaluation Now, we evaluate the properties of our distributed algorithm. We begin with a quantitative evaluation, based on its complexity, comparing it with algorithm 1 from the previous section. Its theoretical complexity is calculated for the worst case, by considering the similarity computation operation as elementary. For the distributed algorithm, the worst case means that for each run, only a two-item group can be created. Under those conditions, for a given dataset of n items, we can determine the amount of similarity computations. For algorithm 1, we note l = length(L); then the innermost "for" loop is run l − i times, and its body contains the only similarity computation, so its cost is l − i. The second "for" loop is run l times for i ranging from 1 to l. Then its cost is (l − 1) + (l − 2) + · · · + 0, which simplifies to l × (l − 1)/2. Finally, for each run of the "while" loop, l is decreased from n to 1, which gives us t1(n) as the amount of similarity computations for algorithm 1: t1(n) = Σl=1..n [l × (l − 1)/2] (1). For the distributed algorithm, at a given step, each one of the l agents evaluates the similarity with its l − 1 brothers. So each step has an l × (l − 1) cost. Then, groups are created and another vote occurs with l decreased by one (since we assume the worst case, only groups of size 2 or l − 1 are built). Since l is equal to n on the first run, we obtain tdist(n) as the amount of similarity computations for the distributed algorithm: tdist(n) = Σl=1..n [l × (l − 1)] (2). Both algorithms then have an O(n³) complexity. But in the worst case, the distributed algorithm does twice the number of elementary operations done by the centralized algorithm. This gap comes from the local decision making in each agent. Because of this, the similarity computations are done twice for each agent pair. We could conceive that an agent sends its computation result to its peer. But it would simply move the problem by generating more communication in the system.
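The worst-case counts of equations (1) and (2) are easy to tabulate, and doing so confirms the factor of two between the two algorithms. A small self-contained check (the two functions are exactly the sums above):

def t1(n):
    # Equation (1): worst-case count for the centralized algorithm.
    return sum(l * (l - 1) // 2 for l in range(1, n + 1))

def tdist(n):
    # Equation (2): worst-case count for the distributed algorithm.
    return sum(l * (l - 1) for l in range(1, n + 1))

for n in (10, 50, 100):
    print(n, t1(n), tdist(n), tdist(n) / t1(n))  # ratio is exactly 2.0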
Figure 5: Experimental results. In a second step, the average complexity of the algorithm has been determined by experiments. The multi-agent system has been executed with randomly generated input data sets ranging from ten to one hundred terms. The given value is the average of comparisons made for one hundred runs without any user interaction. It results in the plots of figure 5. The algorithm is then more efficient on average than the centralized algorithm, and its average complexity is below the worst case. It can be explained by the low probability that a data set forces the system to create only minimal groups (two items) or maximal ones (n − 1 elements) for each step of reasoning. Curve number 2 represents the logarithmic polynomial minimizing the error with curve number 1. The highest degree term of this polynomial is in n² log(n); then our distributed algorithm has an O(n² log(n)) complexity on average. Finally, let's note the reduced variation of the average performances with the maximum and the minimum. In the worst case for 100 terms, the variation is 1,960.75 for an average of 40,550.10 (around 5%), which shows the good stability of the system. 3.2 Qualitative Evaluation Although the quantitative results are interesting, the real advantage of this approach comes from more qualitative characteristics that we will present in this section. All are advantages obtained thanks to the use of an adaptive multi-agent system. The main advantage of the use of a multi-agent system for a clustering task is to introduce dynamics into such a system. The ontologist can make modifications and the hierarchy adapts depending on the request. It is particularly interesting in a knowledge engineering context. Indeed, the hierarchy created by the system is meant to be modified by the ontologist since it is the result of a statistical computation. During the necessary look at the texts to examine the usage contexts of terms [2], the ontologist will be able to interpret the real content and to revise the system proposal. It is extremely difficult to realize this with a centralized "black-box" approach. In most cases, one has to find which reasoning step generated the error and to manually modify the resulting class. Unfortunately, in this case, all the reasoning steps that occurred after the creation of the modified class are lost and must be recalculated by taking the modification into account. That is why a system like ASIUM [6] tries to soften the problem with a system-user collaboration by showing the ontologist the created classes after each step of reasoning. But the ontologist can make a mistake, and become aware of it too late. Figure 6: Concept agent tree after autonomous stabilization of the system. In order to illustrate our claims, we present an example thanks to a few screenshots from the working prototype tested on a medical related corpus. By using test data and letting the system work by itself, we obtain the hierarchy from figure 6 after stabilization. It is clear that the concept described by the term "lésion" (lesion) is misplaced. It happens that the similarity computations place it closer to "femme" (woman) and "chirurgien" (surgeon) than to "infection", "gastro-entérite" (gastro-enteritis) and "hépatite" (hepatitis). This wrong position for "lesion" is explained by the fact that without ontologist input the reasoning is only done on statistical criteria.
Figure 7: Concept agent tree after ontologist modification

Then, the ontologist moves the concept into the right branch, by assigning "ConceptAgent:8" as its new parent. The name "ConceptAgent:X" is automatically given to a concept agent that is not described by a term. The system reacts by itself and refines the clustering hierarchy to obtain a binary tree, by creating "ConceptAgent:11". The new stable state is the one of figure 7.

This system-user coupling is necessary to build an ontology, but no particular adjustment of the distributed algorithm principle is needed, since each agent does autonomous local processing and communicates with its neighborhood by messages. Moreover, this algorithm can de facto be distributed over a computer network: communication between agents is done by sending messages, and each agent keeps its decision autonomy. Making the system run networked would therefore not require adjusting the algorithm; it would only require reworking the communication layer and the agent creation process, since in our current implementation those are not networked.

4. MULTI-CRITERIA HIERARCHY

In the previous sections, we assumed that a similarity can be computed for any term pair. But as soon as one uses real data this property no longer holds: some terms do not have any similarity value with any extracted term. Moreover, for leaf nodes it is sometimes interesting to use other means to position them in the hierarchy. For this low-level structuring, ontologists generally base their choices on simple heuristics. Using this observation, we built a new set of rules, not based on similarity, to support low-level structuring.

4.1 Adding Head Coverage Rules

In this case, agents can act with a very local point of view, simply by looking at the parent/child relation. Each agent can try to determine whether its parent is adequate. This can be guessed because each concept agent is described by a set of terms, and thanks to the "Head-Expansion" term network. In the following, TX denotes the set of terms describing concept agent X, and head(TX) the set of all the terms that are head of at least one element of TX. With these two notations we can define the parent adequacy function a(P, C) between a parent P and a child C. Then, the best parent for C is the agent P that maximizes a(P, C). An agent unsatisfied by its parent can then try to find a better one by evaluating adequacy with candidates. We designed a complementary algorithm to drive this search: when an agent C is unsatisfied by its parent P, it evaluates a(Bi, C) with all its brothers (noted Bi); the brother maximizing a(Bi, C) is then chosen as the new parent.

Figure 8: Concept agent tree after autonomous stabilization of the system without head coverage rule

We now illustrate this rule's behavior with an example. Figure 8 shows the state of the system after stabilization on test data. We can notice that "hépatite virale" (viral hepatitis) is still linked to the taxonomy root. This is caused by the fact that there is no similarity value between the "viral hepatitis" term and any term of the other concept agents.

Figure 9: Concept agent tree after activation of the head coverage rule

After activating the head coverage rule and letting the system stabilize again, we obtain figure 9. We can see that "viral hepatitis" slipped through the branch leading to "hepatitis" and chose it as its new parent. It is a sensible default choice since "viral hepatitis" is a more specific term than "hepatitis". This rule tends to push agents described by a set of terms to become leaves of the concept tree, which addresses our concern to improve the low-level structuring of our taxonomy. But our agents obviously lack a way to backtrack in case of modifications in the taxonomy that would leave them located in the wrong branch. That is one of the points where our system still has to be improved by adding another set of rules.
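The paper's exact formula for a(P, C) did not survive extraction, so the sketch below assumes a natural reading, the fraction of C's head terms that P's terms cover, and implements the brother-search rule just described. Both the formula and all identifiers are our assumptions, offered only to make the rule concrete.

```python
# Hypothetical sketch of the head-coverage rule. The adequacy formula (share
# of C's head terms covered by P's terms) is our assumption; the paper's
# exact definition was lost in extraction.

def head(terms, head_of):
    # head_of maps a term to its syntactic head in the "Head-Expansion" network.
    return {head_of[t] for t in terms if t in head_of}

def adequacy(T_P, T_C, head_of):
    heads_C = head(T_C, head_of)
    if not heads_C:
        return 0.0
    return len(heads_C & T_P) / len(heads_C)

def best_parent(C_terms, brothers, head_of):
    # An unsatisfied agent picks, among its brothers, the one maximizing a(B_i, C).
    return max(brothers, key=lambda b: adequacy(brothers[b], C_terms, head_of))

# Toy data mirroring the example: "viral hepatitis" has "hepatitis" as head.
head_of = {"viral hepatitis": "hepatitis"}
brothers = {"hepatitis": {"hepatitis"}, "infection": {"infection"}}
print(best_parent({"viral hepatitis"}, brothers, head_of))  # -> "hepatitis"
```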
4.2 On Using Several Criteria

In the previous sections and examples, we only used one algorithm at a time. The distributed clustering algorithm tends to introduce new layers in the taxonomy, while the head coverage algorithm tends to push some of the agents toward the leaves of the taxonomy. This obviously raises the question of how to deal with multiple criteria in our taxonomy building, and how agents determine their priorities at a given time.

The solution we chose came from the goal of minimizing non-cooperation within the system, in accordance with the ADELFE method. Each agent computes three non-cooperation degrees and chooses its current priority depending on which degree is the highest. For a given agent A having a parent P and a set of brothers Bi, and which received a set of messages Mk having priorities pk, the three non-cooperation degrees are:

• μH(A) = 1 − a(P, A), the "head coverage" non-cooperation degree, determined by the head coverage of the parent,

• μB(A) = max_i (1 − similarity(A, Bi)), the "brotherhood" non-cooperation degree, determined by the worst brother of A regarding similarities,

• μM(A) = max_k (pk), the "message" non-cooperation degree, determined by the most urgent message received.

Then, the non-cooperation degree μ(A) of agent A is:

μ(A) = max(μH(A), μB(A), μM(A))

and we have three cases determining which kind of action A will choose:

• if μ(A) = μH(A) then A will use the head coverage algorithm detailed in the previous subsection,

• if μ(A) = μB(A) then A will use the distributed clustering algorithm (see section 3),

• if μ(A) = μM(A) then A will process Mk immediately in order to help its sender.

These three cases summarize the current activities of our agents: find the best parent (μ(A) = μH(A)), improve the structuring through clustering (μ(A) = μB(A)), and process other agents' messages (μ(A) = μM(A)) in order to help them fulfill their own goals. A minimal sketch of this arbitration follows.
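In the sketch below, the Agent fields, helper names, and the order used to break ties are our assumptions; only the max rule itself comes from the text.

```python
from dataclasses import dataclass, field

# Sketch of the arbitration rule mu(A) = max(mu_H, mu_B, mu_M).

@dataclass
class Agent:
    parent_adequacy: float           # a(P, A), computed as in section 4.1
    brother_similarities: list       # similarity(A, B_i) for each brother
    message_priorities: list = field(default_factory=list)  # p_k of inbox

def choose_action(a: Agent) -> str:
    mu_H = 1.0 - a.parent_adequacy
    mu_B = max((1.0 - s) for s in a.brother_similarities)
    mu_M = max(a.message_priorities, default=0.0)
    mu = max(mu_H, mu_B, mu_M)
    if mu == mu_M:
        return "process most urgent message"      # help its sender
    if mu == mu_H:
        return "head coverage algorithm"          # find a better parent
    return "distributed clustering algorithm"     # improve the structuring

print(choose_action(Agent(0.9, [0.4, 0.7], [0.2])))  # mu_B = 0.6 wins -> clustering
```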
4.3 Experimental Complexity Revisited

We evaluated the experimental complexity of the whole multi-agent system when all the rules are activated. In this case, the metric used is the number of messages exchanged in the system. Once again the system has been executed with input data sets ranging from ten to one hundred terms. The given value is the average number of messages sent in the system as a whole over one hundred runs without user interaction. It results in the plots of figure 10. Curve number 1 represents the average of the values obtained. Curve number 2 represents the average of the values obtained when only the distributed clustering algorithm is activated, not the full rule set. Curve number 3 represents the polynomial minimizing the error with respect to curve number 1. The highest-degree term of this polynomial is in n³, so our multi-agent system has O(n³) complexity on average.

Figure 10: Experimental results

Moreover, note the very small spread of the average performance between the maximum and the minimum: in the worst case, for 100 terms, the variation is 126.73 for an average of 20,737.03 (around 0.6%), which shows the excellent stability of the system. Finally, the extra head coverage rules are a real improvement over the distributed algorithm alone. They introduce more constraints, and the stability point is reached with fewer interactions and less decision making by the agents. This means that fewer messages are exchanged in the system while obtaining a tree of higher quality for the ontologist.

5. DISCUSSION & PERSPECTIVES

5.1 Current Limitation of our Approach

The most important limitation of our current algorithm is that the result depends on the order in which the data is added. When the system works by itself on a fixed data set given during initialization, the final result is equivalent to what we could obtain with a centralized algorithm. On the contrary, adding a new item after a first stabilization has an impact on the final result.

Figure 11: Concept agent tree after autonomous stabilization of the system

To illustrate our claims, we present another example of the working system. By using test data and letting the system work by itself, we obtain the hierarchy of figure 11 after stabilization.

Figure 12: Concept agent tree after taking into account "hepatitis"

Then, the ontologist interacts with the system and adds a new concept described by the term "hepatitis", linked to the root. The system reacts and stabilizes; we then obtain figure 12 as a result. "hepatitis" is located in the right branch, but we have not obtained the same organization as in figure 6 of the previous example. We need to improve our distributed algorithm to allow a concept to move along a branch. We are currently working on the required rules, but the comparison with the centralized algorithm will then become very difficult, in particular since these rules will take into account criteria ignored by the centralized algorithm.

5.2 Pruning for Ontologies Building

In section 3, we presented the distributed clustering algorithm used in the Dynamo system. Since this work was first based on this algorithm, it introduced a clear bias toward binary trees as a result. But we have to keep in mind that we are trying to obtain taxonomies which are more refined and concise. Although the head coverage rule is an improvement, because it is based on how ontologists generally work, it only addresses low-level structuring, not the intermediate levels of the tree. Looking at figure 7, it is clear that some pruning could be done in the taxonomy. In particular, since "lésion" moved, "ConceptAgent:9" is not needed anymore and could be removed. Moreover, the branch starting with "ConceptAgent:8" clearly respects the constraint of building a binary tree, but it would be more useful to the user in a more compact and meaningful form; in this case "ConceptAgent:10" and "ConceptAgent:11" could probably be merged. Currently, our system has the necessary rules to create intermediate levels in the taxonomy, or to have concepts shift towards the leaves. As we pointed out, this is not enough, so new rules are needed to allow removing nodes from the tree, or moving them toward the root; a hypothetical sketch of such a pruning pass is given below. Most of the work needed to develop those rules consists in finding the relevant statistical information that will support the ontologist.
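Purely as a hypothetical illustration of the kind of rule discussed above (the paper does not implement it), the following sketch collapses an unlabeled intermediate node with a single child, which is the operation that would remove "ConceptAgent:9" once "lésion" has moved. The rule and all names are our assumptions.

```python
# Hypothetical pruning pass (ours, not Dynamo's): splice out an unlabeled
# intermediate node when it has a single child, reattaching the child to
# the grandparent. tree maps a node to the list of its children; labeled(n)
# says whether the node is described by a term (ConceptAgent:X nodes are not).

def prune(tree, labeled):
    changed = True
    while changed:
        changed = False
        for node, children in list(tree.items()):
            for c in list(children):
                if not labeled(c) and len(tree.get(c, [])) == 1:
                    (grandchild,) = tree[c]
                    children[children.index(c)] = grandchild  # splice c out
                    del tree[c]
                    changed = True
    return tree

tree = {"root": ["ConceptAgent:9"], "ConceptAgent:9": ["infection"]}
print(prune(tree, lambda n: not n.startswith("ConceptAgent:")))
# -> {'root': ['infection']}
```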
6. CONCLUSION

Although it has been presented as a promising solution for ensuring the quality and terminological richness of models, ontology building from textual corpus analysis is difficult and costly. It requires supervision by an analyst and must take the aim of the ontology into account. Natural language processing tools ease the localization of knowledge in texts through language usage. That said, those tools produce a huge amount of lexical or grammatical data which is not trivial to examine when defining conceptual elements.

Our contribution lies in this step of the modeling process from texts, before any attempt to normalize or formalize the result. We proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts. Our system makes use of a terminological network resulting from an analysis made by Syntex. The current state of our software allows us to produce simple structures, to propose them to the ontologist, and to make them evolve depending on the modifications the ontologist makes. The performance of the system is interesting, and some aspects are even comparable to the centralized counterpart. Its strengths are mostly qualitative, since it allows more subtle user interactions and a progressive adaptation to new linguistic-based information.

From the point of view of ontology building, this work is a first step showing the relevance of our approach. It must continue, both to ensure better robustness during classification and to obtain semantically richer structures than simple trees. Among these improvements, we are mostly focusing on pruning to obtain better taxonomies. We are currently working on the criterion to trigger the actions complementary to the structure changes applied by our clustering algorithm: this algorithm introduces intermediate levels, and we need to be able to remove them when necessary, in order to reach a dynamic equilibrium.

From the multi-agent engineering point of view, the use of adaptive multi-agent systems in a dynamic ontology context has shown its relevance. Dynamic ontology building can be seen as complex problem solving, and in such a case self-organization through cooperation has proven an efficient solution. More generally, it is likely to be of interest for other design-related tasks, even though we focus only on knowledge engineering in this paper. Of course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach. We plan to work on such benchmarking in the near future.
A Formal Model for Situated Semantic Alignment

Manuel Atencia, Marco Schorlemmer
IIIA, Artificial Intelligence Research Institute
CSIC, Spanish National Research Council
Bellaterra (Barcelona), Catalonia, Spain
{manu,marco}@iiia.csic.es

ABSTRACT

Ontology matching is currently a key technology to achieve the semantic alignment of ontological entities used by knowledge-based applications, and therefore to enable their interoperability in distributed environments such as multiagent systems. Most ontology matching mechanisms, however, assume matching prior to integration and rely on semantics that has been coded a priori in concept hierarchies or external sources. In this paper, we present a formal model for a semantic alignment procedure that incrementally aligns differing conceptualisations of two or more agents relative to their respective perception of the environment or domain they are acting in. It hence makes the situation in which the alignment occurs explicit in the model. We resort to Channel Theory to carry out the formalisation.

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence: coherence and coordination, multiagent systems; D.2.12 [Software Engineering]: Interoperability: data mapping; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods: semantic networks, relation systems.

General Terms: Theory

1. INTRODUCTION

An ontology is commonly defined as a specification of the conceptualisation of a particular domain. It fixes the vocabulary used by knowledge engineers to denote concepts and their relations, and it constrains the interpretation of this vocabulary to the meaning originally intended by knowledge engineers. As such, ontologies have been widely adopted as a key technology that may favour knowledge sharing in distributed environments, such as multi-agent systems, federated databases, or the Semantic Web. But the proliferation of many diverse ontologies caused by different conceptualisations of even the same domain (and their subsequent specification using varying terminology) has highlighted the need for ontology matching techniques that are capable of computing semantic relationships between entities of separately engineered ontologies [5, 11].

Until recently, most ontology matching mechanisms developed so far have taken a classical functional approach to the semantic heterogeneity problem, in which ontology matching is seen as a process taking two or more ontologies as input and producing a semantic alignment of ontological entities as output [3]. Furthermore, matching has often been carried out at design time, before integrating knowledge-based systems or making them interoperate. This might have been successful for clearly delimited and stable domains and for closed distributed systems, but it is untenable and even undesirable for the kind of applications that are currently deployed in open systems. Multi-agent communication, peer-to-peer information sharing, and web-service composition are all of a decentralised, dynamic, and open-ended nature, and they require ontology matching to be locally performed during run time. In addition, in many situations peer ontologies are not even open for inspection (e.g., when they are based on commercially confidential information). Certainly, there exist efforts to efficiently match ontological entities at run time, taking only those ontology fragments that are necessary for the task at hand [10, 13, 9, 8].
Nevertheless, the techniques used by these systems to establish the semantic relationships between ontological entities, even though applied at run time, still exploit a priori defined concept taxonomies as they are represented in the graph-based structures of the ontologies to be matched, use previously existing external sources such as thesauri (e.g., WordNet) and upper-level ontologies (e.g., CyC or SUMO), or resort to additional background knowledge repositories or shared instances.

We claim that semantic alignment of ontological terminology is ultimately relative to the particular situation in which the alignment is carried out, and that this situation should be made explicit and brought into the alignment mechanism. Even two agents with identical conceptualisation capabilities, using exactly the same vocabulary to specify their respective conceptualisations, may fail to interoperate in a concrete situation because of their differing perception of the domain. Imagine a situation in which two agents are facing each other in front of a checker board. Agent A1 may conceptualise a figure on the board as situated on the left margin of the board, while agent A2 may conceptualise the same figure as situated on the right. Although the conceptualisation of 'left' and 'right' is done in exactly the same manner by both agents, and even if both use the terms left and right in their communication, they will still need to align their respective vocabularies if they want to successfully communicate to each other actions that change the position of figures on the checker board. Their semantic alignment, however, will only be valid in the scope of their interaction within this particular situation or environment. The same agents situated differently may produce a different alignment.

This scenario is reminiscent of those in which a group of distributed agents adapt to form an ontology and a shared lexicon in an emergent, bottom-up manner, with only local interactions and no central control authority [12]. This sort of self-organised emergence of shared meaning is ultimately grounded on the physical interaction of agents with the environment. In this paper, however, we address the case in which agents are already endowed with a top-down engineered ontology (it can even be the same one), which they do not adapt or refine, but for which they want to find the semantic relationships with separate ontologies of other agents on the grounds of their communication within a specific situation. In particular, we provide a formal model that formalises situated semantic alignment as a sequence of information-channel refinements in the sense of Barwise and Seligman's theory of information flow [1]. This theory is particularly useful for our endeavour because it models the flow of information occurring in distributed systems due to the particular situations, or tokens, that carry information. Analogously, the semantic alignment that will allow information to flow will ultimately be carried by the particular situation the agents are acting in.

We shall therefore consider a scenario with two or more agents situated in an environment. Each agent will have its own viewpoint of the environment, so that, if the environment is in a concrete state, both agents may have different perceptions of this state.
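As a toy illustration of this situatedness (our own sketch; the board encoding is invented), the same environment state can be classified in opposite ways by the two agents' perception functions:

```python
# Toy sketch: one environment state, two perceptions. A state is the column
# of a figure on a width-8 board; A2 faces A1 across the board, so the two
# see-functions disagree on 'left'/'right'.

def see_1(column: int) -> str:          # agent A1's viewpoint
    return "left" if column < 4 else "right"

def see_2(column: int) -> str:          # agent A2's mirrored viewpoint
    return "left" if column >= 4 else "right"

state = 1                               # the figure sits near A1's left margin
print(see_1(state), see_2(state))       # -> left right
```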
Because of these differences there may be a mismatch in the meaning of the syntactic entities by which agents describe their perceptions (and which constitute the agents' respective ontologies). We state that these syntactic entities can be related according to the intrinsic semantics provided by the existing relationship between the agents' viewpoints of the environment. The existence of this relationship is precisely justified by the fact that the agents are situated and observe the same environment.

In Section 2 we describe our formal model for Situated Semantic Alignment (SSA). First, in Section 2.1 we associate a channel to the scenario under consideration and show how the distributed logic generated by this channel provides the logical relationships between the agents' viewpoints of the environment. Second, in Section 2.2 we present a method by which agents obtain approximations of this distributed logic. These approximations gradually become more reliable as the method is applied. In Section 3 we report on an application of our method. Conclusions and further work are analyzed in Section 4. Finally, an appendix summarizes the terms and theorems of Channel Theory used along the paper. We do not assume any knowledge of Channel Theory; we restate basic definitions and theorems in the appendix, but any detailed exposition of the theory is outside the scope of this paper.

2. A FORMAL MODEL FOR SSA

2.1 The Logic of SSA

Consider a scenario with two agents A1 and A2 situated in an environment E (the generalization to any numerable set of agents is straightforward). We associate a numerable set S of states to E and, at any given instant, we suppose E to be in one of these states. We further assume that each agent is able to observe the environment and has its own perception of it. This ability is faithfully captured by a surjective function seei : S → Pi, where i ∈ {1, 2}; typically see1 and see2 are different.

According to Channel Theory, information is only viable where there is a systematic way of classifying some range of things as being this way or that; in other words, where there is a classification (see appendix A). So in order to work within the framework of Channel Theory, we must associate classifications to the components of our system. For each i ∈ {1, 2}, we consider a classification Ai that models Ai's viewpoint of E. First, tok(Ai) is composed of Ai's perceptions of E states, that is, tok(Ai) = Pi. Second, typ(Ai) contains the syntactic entities by which Ai describes its perceptions, the ones constituting the ontology of Ai. Finally, |=Ai synthesizes how Ai relates its perceptions with these syntactic entities.

Now, with the aim of associating environment E with a classification E, we choose the power classification of S as E, which is the classification whose set of types is equal to 2^S, whose tokens are the elements of S, and for which a token e is of type ε if e ∈ ε. The reason for taking the power classification is that there are no syntactic entities that may play the role of types for E since, in general, there is no global conceptualisation of the environment. However, the set of types of the power classification includes all possible token configurations potentially described by types. Thus tok(E) = S, typ(E) = 2^S, and e |=E ε if and only if e ∈ ε.
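A minimal sketch (ours) of these definitions as plain data: a classification is a token set, a type set, and a satisfaction relation, and an agent's classification is induced by its see function. All concrete values are illustrative.

```python
# Sketch: classifications as plain data, following the definitions above.
# tok, typ are sets; rel is the satisfaction relation |= as a set of pairs.

from itertools import chain, combinations

def classification(tokens, types, sat):
    # sat(token, type) -> bool encodes |=
    return {"tok": set(tokens), "typ": set(types),
            "rel": {(a, t) for a in tokens for t in types if sat(a, t)}}

# Environment: states S, with the power classification (types: subsets of S).
S = {"e1", "e2", "e3"}
powerset = [frozenset(x) for x in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]
E = classification(S, powerset, lambda e, eps: e in eps)

# Agent A1: tokens are its perceptions via see_1; types are its own ontology.
see_1 = {"e1": "p_left", "e2": "p_left", "e3": "p_right"}
A1 = classification(set(see_1.values()), {"left", "right"},
                    lambda p, t: p == "p_" + t)
print(("p_left", "left") in A1["rel"])  # -> True
```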
The notion of channel (see appendix A) is fundamental in Barwise and Seligman's theory. The information flow among the components of a distributed system is modelled in terms of a channel, and the relationships among these components are expressed via infomorphisms (see appendix A), which provide a way of moving information between them. The information flow of the scenario under consideration is accurately described by the channel ℰ = {fi : Ai → E}i∈{1,2} defined as follows:

• ˆfi(α) = {e ∈ tok(E) | seei(e) |=Ai α} for each α ∈ typ(Ai)

• ˇfi(e) = seei(e) for each e ∈ tok(E)

where i ∈ {1, 2}. The definition of ˇfi seems natural, while ˆfi is defined in such a way that the fundamental property of infomorphisms is fulfilled:

ˇfi(e) |=Ai α  iff  seei(e) |=Ai α  (by definition of ˇfi)
              iff  e ∈ ˆfi(α)       (by definition of ˆfi)
              iff  e |=E ˆfi(α)     (by definition of |=E)

Consequently, E is the core of channel ℰ, and a state e ∈ tok(E) connects the agents' perceptions ˇf1(e) and ˇf2(e) (see Figure 1).

Figure 1: Channel ℰ (diagram of the infomorphisms f1 : A1 → E and f2 : A2 → E at the type and token levels)

ℰ explains the information flow of our scenario by virtue of agents A1 and A2 being situated and perceiving the same environment E. We want to obtain meaningful relations among the agents' syntactic entities, that is, among the agents' types, and we state that meaningfulness must be in accord with ℰ.

The sum operation (see appendix A) gives us a way of putting the two agents' classifications of channel ℰ together into a single classification, namely A1 + A2, and also the two infomorphisms together into a single infomorphism, f1 + f2 : A1 + A2 → E. A1 + A2 assembles the agents' classifications in a very coarse way. tok(A1 + A2) is the cartesian product of tok(A1) and tok(A2), that is, tok(A1 + A2) = {⟨p1, p2⟩ | pi ∈ Pi}, so a token of A1 + A2 is a pair of agents' perceptions with no restrictions. typ(A1 + A2) is the disjoint union of typ(A1) and typ(A2), and ⟨p1, p2⟩ is of type ⟨i, α⟩ if pi is of type α. It is important to take the disjoint union because A1 and A2 could use identical types with the purpose of describing their respective perceptions of E.

Classification A1 + A2 seems the natural place in which to search for relations among the agents' types. Now, Channel Theory provides a way to make all these relations explicit in a logical fashion by means of theories and local logics (see appendix A). The theory generated by the sum classification, Th(A1 + A2), and hence its generated logic, Log(A1 + A2), involve all those constraints among agents' types valid according to A1 + A2. Notice however that these constraints are obvious. As we stated above, meaningfulness must be in accord with channel ℰ. Classifications A1 + A2 and E are connected via the sum infomorphism f = f1 + f2, where:

• ˆf(⟨i, α⟩) = ˆfi(α) = {e ∈ tok(E) | seei(e) |=Ai α} for each ⟨i, α⟩ ∈ typ(A1 + A2)

• ˇf(e) = ⟨ˇf1(e), ˇf2(e)⟩ = ⟨see1(e), see2(e)⟩ for each e ∈ tok(E)

Meaningful constraints among the agents' types are in accord with channel ℰ because they are computed by making use of f, as we expound below. As important as the notion of channel is the concept of distributed logic (see appendix A). Given a channel C and a logic L on its core, DLogC(L) represents the reasoning about relations among the components of C justified by L. If L is the logic generated by the core, the resulting distributed logic captures in a logical fashion the information flow inherent in the channel; for our channel ℰ we denote it by Log(ℰ).
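Continuing the previous sketch (same caveats; all names are ours), the sum classification and the fundamental property of the infomorphisms can be checked directly:

```python
# Sketch continued: the sum A1 + A2 and the fundamental property check.
# Assumes classification(), E, A1, see_1 from the previous snippet.

see_2 = {"e1": "p_right", "e2": "p_left", "e3": "p_left"}
A2 = classification(set(see_2.values()), {"left", "right"},
                    lambda p, t: p == "p_" + t)

def sum_classification(A, B):
    tokens = {(a, b) for a in A["tok"] for b in B["tok"]}
    types = {(1, t) for t in A["typ"]} | {(2, t) for t in B["typ"]}
    rel = {((a, b), (i, t)) for (a, b) in tokens for (i, t) in types
           if ((a, t) in A["rel"] if i == 1 else (b, t) in B["rel"])}
    return {"tok": tokens, "typ": types, "rel": rel}

A12 = sum_classification(A1, A2)
print(len(A12["tok"]), len(A12["typ"]))   # -> 4 tokens, 4 types

# Infomorphism f1: f1_hat(alpha) = {e | see_1(e) |= alpha}, f1_check = see_1.
def f1_hat(alpha):
    return frozenset(e for e in E["tok"] if (see_1[e], alpha) in A1["rel"])

# Fundamental property: f1_check(e) |= alpha  iff  e |= f1_hat(alpha).
assert all(((see_1[e], a) in A1["rel"]) == (e in f1_hat(a))
           for e in E["tok"] for a in A1["typ"])
```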
In our case, Log(ℰ) explains the relationship between the agents' viewpoints of the environment in a logical fashion. On the one hand, the constraints of Th(Log(ℰ)) are defined by:

Γ ⊢Log(ℰ) Δ  if  ˆf[Γ] ⊢Log(E) ˆf[Δ]    (1)

where Γ, Δ ⊆ typ(A1 + A2). On the other hand, the set of normal tokens, NLog(ℰ), is equal to the range of the function ˇf:

NLog(ℰ) = ˇf[tok(E)] = {⟨see1(e), see2(e)⟩ | e ∈ tok(E)}

Therefore, a normal token is a pair of agents' perceptions that is restricted by coming from the same environment state (unlike A1 + A2 tokens). All constraints of Th(Log(ℰ)) are satisfied by all normal tokens (because Log(ℰ) is a logic). In this particular case, this condition is also sufficient (the proof is straightforward); as an alternative to (1) we have:

Γ ⊢Log(ℰ) Δ  iff  for all e ∈ tok(E), if (∀⟨i, γ⟩ ∈ Γ)[seei(e) |=Ai γ] then (∃⟨j, δ⟩ ∈ Δ)[seej(e) |=Aj δ]    (2)

where Γ, Δ ⊆ typ(A1 + A2).

Log(ℰ) is the logic of SSA. Th(Log(ℰ)) comprises the most meaningful constraints among the agents' types in accord with channel ℰ. In other words, the logic of SSA contains, and also justifies, the most meaningful relations among those syntactic entities that agents use in order to describe their own environment perceptions. Log(ℰ) is complete since Log(E) is complete, but it is not necessarily sound because, although Log(E) is sound, ˇf is not surjective in general (see appendix B). If Log(ℰ) is also sound then Log(ℰ) = Log(A1 + A2) (see appendix B), which means there is no significant relation between the agents' points of view of the environment according to ℰ. It is precisely the fact that Log(ℰ) is unsound that allows a significant relation between the agents' viewpoints. This relation is expressed at the type level, in terms of constraints, by Th(Log(ℰ)), and at the token level by NLog(ℰ).
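When S is finite, condition (2) is directly computable. The following brute-force sketch (ours) enumerates the single-type constraints of Th(Log(ℰ)) for an invented two-agent scenario:

```python
# Sketch: brute-force check of condition (2) for sequents with one type on
# each side. Assumes finite S, see-functions as dicts, and per-agent
# satisfaction tables sat[i][(p, alpha)] -> bool. All data is made up.

S = ["e1", "e2", "e3"]
see = {1: {"e1": "p1a", "e2": "p1a", "e3": "p1b"},
       2: {"e1": "p2a", "e2": "p2b", "e3": "p2b"}}
typ = {1: ["left", "right"], 2: ["left", "right"]}
sat = {1: {("p1a", "left"): True, ("p1a", "right"): False,
           ("p1b", "left"): False, ("p1b", "right"): True},
       2: {("p2a", "right"): True, ("p2a", "left"): False,
           ("p2b", "right"): False, ("p2b", "left"): True}}

def holds(i, gamma, j, delta):
    # {<i, gamma>} entails {<j, delta>} iff every state whose i-perception
    # satisfies gamma has a j-perception satisfying delta (condition (2)).
    return all(sat[j][(see[j][e], delta)]
               for e in S if sat[i][(see[i][e], gamma)])

for i in (1, 2):
    for g in typ[i]:
        for j in (1, 2):
            for d in typ[j]:
                if holds(i, g, j, d):
                    print(f"<{i},{g}> |- <{j},{d}>")
```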
2.2 Approaching the logic of SSA through communication

We have dubbed Log(ℰ) the logic of SSA. Th(Log(ℰ)) comprehends the most meaningful constraints among the agents' types according to ℰ. The problem is that neither agent can make use of this theory, because they do not know ℰ completely. In this section, we present a method by which agents obtain approximations to Th(Log(ℰ)), and we prove that these approximations gradually become more reliable as the method is applied.

Agents can obtain approximations to Th(Log(ℰ)) through communication. A1 and A2 communicate by exchanging information about their perceptions of environment states. This information is expressed in terms of their own classification relations. Specifically, if E is in a concrete state e, we assume that agents can convey to each other which types are satisfied by their respective perceptions of e and which are not. This exchange generates a channel C = {fi : Ai → C}i∈{1,2}, and Th(Log(C)) contains the constraints among the agents' types justified by the fact that agents have observed e. Now, if E turns to another state e′ and agents proceed as before, another channel C′ = {f′i : Ai → C′}i∈{1,2} gives account of the new situation, considering also the previous information. Th(Log(C′)) comprises the constraints among the agents' types justified by the fact that agents have observed e and e′. The significant point is that C′ is a refinement of C (see appendix A). Theorem 2.1 below ensures that the refined channel involves more reliable information. The communication supposedly ends when agents have observed all the environment states. Again this situation can be modeled by a channel, call it C∗ = {f∗i : Ai → C∗}i∈{1,2}. Theorem 2.2 states that Th(Log(C∗)) = Th(Log(ℰ)). Theorems 2.1 and 2.2 together assure that, by applying the method, agents obtain approximations to Th(Log(ℰ)) that are gradually more reliable.

Theorem 2.1. Let C = {fi : Ai → C}i∈{1,2} and C′ = {f′i : Ai → C′}i∈{1,2} be two channels. If C′ is a refinement of C then:

1. Th(Log(C′)) ⊆ Th(Log(C))
2. NLog(C′) ⊇ NLog(C)

Proof. Since C′ is a refinement of C, there exists a refinement infomorphism r from C′ to C; so fi = r ◦ f′i. Let A =def A1 + A2, f =def f1 + f2 and f′ =def f′1 + f′2.

1. Let Γ and Δ be subsets of typ(A) and assume that Γ ⊢Log(C′) Δ, which means ˆf′[Γ] ⊢C′ ˆf′[Δ]. We have to prove Γ ⊢Log(C) Δ, or equivalently, ˆf[Γ] ⊢C ˆf[Δ]. We proceed by reductio ad absurdum. Suppose c ∈ tok(C) does not satisfy the sequent ⟨ˆf[Γ], ˆf[Δ]⟩. Then c |=C ˆf(γ) for all γ ∈ Γ, and c ⊭C ˆf(δ) for all δ ∈ Δ. Let us choose an arbitrary γ ∈ Γ. We have γ = ⟨i, α⟩ for some α ∈ typ(Ai) and i ∈ {1, 2}. Thus ˆf(γ) = ˆf(⟨i, α⟩) = ˆfi(α) = ˆr ◦ ˆf′i(α) = ˆr(ˆf′i(α)). Therefore:

c |=C ˆf(γ)  iff  c |=C ˆr(ˆf′i(α))  iff  ˇr(c) |=C′ ˆf′i(α)  iff  ˇr(c) |=C′ ˆf′(⟨i, α⟩)  iff  ˇr(c) |=C′ ˆf′(γ)

Consequently, ˇr(c) |=C′ ˆf′(γ) for all γ ∈ Γ. Since ˆf′[Γ] ⊢C′ ˆf′[Δ], there exists δ∗ ∈ Δ such that ˇr(c) |=C′ ˆf′(δ∗). A sequence of equivalences similar to the one above justifies c |=C ˆf(δ∗), contradicting the assumption that c is a counterexample to ⟨ˆf[Γ], ˆf[Δ]⟩. Hence Γ ⊢Log(C) Δ, as we wanted to prove.

2. Let ⟨a1, a2⟩ ∈ tok(A) and assume ⟨a1, a2⟩ ∈ NLog(C). Then there exists a token c in C such that ⟨a1, a2⟩ = ˇf(c), and we have ai = ˇfi(c) = ˇf′i ◦ ˇr(c) = ˇf′i(ˇr(c)), for i ∈ {1, 2}. Hence ⟨a1, a2⟩ = ˇf′(ˇr(c)) and ⟨a1, a2⟩ ∈ NLog(C′). Consequently, NLog(C′) ⊇ NLog(C), which concludes the proof.

Remark 2.1. Theorem 2.1 asserts that the more refined channel gives more reliable information: even though its theory has fewer constraints, it has more normal tokens to which they apply.

In the remainder of the section, we explicitly describe the process of communication and conclude with the proof of Theorem 2.2. Let us assume that typ(Ai) is finite for i ∈ {1, 2} and that S is infinite numerable, though the finite case can be treated in a similar way. We also choose an infinite numerable set of symbols {cⁿ | n ∈ ℕ} (we write these symbols with superindices because we limit the use of subindices to what concerns agents; note this set is chosen with the same cardinality as S). We omit infomorphism superscripts when no confusion arises. Types are usually denoted by Greek letters and tokens by Latin letters, so if f is an infomorphism, f(α) ≡ ˆf(α) and f(a) ≡ ˇf(a).

Agents' communication starts from the observation of E. Let us suppose that E is in state e¹ ∈ S = tok(E). A1's perception of e¹ is f1(e¹), and A2's perception of e¹ is f2(e¹). We take for granted that A1 can communicate to A2 those types that are and are not satisfied by f1(e¹) according to its classification A1, and A2 can do the same. Since both typ(A1) and typ(A2) are finite, this process eventually finishes. After this communication, a channel C¹ = {f¹i : Ai → C¹}i∈{1,2} arises (see Figure 2).

Figure 2: The first communication stage (diagram: the infomorphisms f¹1 and f¹2 from A1 and A2 into the core C¹)

On the one hand, C¹ is defined by:

• tok(C¹) = {c¹}
• typ(C¹) = typ(A1 + A2)
• c¹ |=C¹ ⟨i, α⟩ if fi(e¹) |=Ai α (for every ⟨i, α⟩ ∈ typ(A1 + A2))

On the other hand, f¹i, with i ∈ {1, 2}, is defined by:

• f¹i(α) = ⟨i, α⟩ (for every α ∈ typ(Ai))
• f¹i(c¹) = fi(e¹)

Log(C¹) represents the reasoning about the first stage of communication. It is easy to prove that Th(Log(C¹)) = Th(C¹). The significant point is that both agents know C¹ as the result of the communication. Hence they can separately compute the theory Th(C¹) = ⟨typ(C¹), ⊢C¹⟩, which contains the constraints among the agents' types justified by the fact that agents have observed e¹.

Now, let us assume that E turns to a new state e². Agents can proceed as before, exchanging this time information about their perceptions of e². Another channel C² = {f²i : Ai → C²}i∈{1,2} comes up. We define C² so as to take into account also the information provided by the previous stage of communication. On the one hand, C² is defined by:

• tok(C²) = {c¹, c²}
The significant point is that both agents know C1 as the result of the communication. Hence they can compute separately theory Th(C1 ) = typ(C1 ), C1 which contains the constraints among agents'' types justified by the fact that agents have observed e1 . Now, let us assume that E turns to a new state e2 . Agents can proceed as before, exchanging this time information about their perceptions of e2 . Another channel C2 = {f2 i : Ai → C2 }i∈{1,2} comes up. We define C2 so as to take also into account the information provided by the previous stage of communication. On the one hand, C2 is defined by: • tok(C2 ) = {c1 , c2 } 1 We write these symbols with superindices because we limit the use of subindices for what concerns to agents. Note this set is chosen with the same cardinality of S. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1281 • typ(C2 ) = typ(A1 + A2) • ck |=C2 i, α if fi(ek ) |=Ai α (for every k ∈ {1, 2} and i, α ∈ typ(A1 + A2)) On the other hand, f2 i , with i ∈ {1, 2}, is defined by: • f2 i (α) = i, α (for every α ∈ typ(Ai)) • f2 i (ck ) = fi(ek ) (for every k ∈ {1, 2}) Log(C2 ) represents the reasoning about the former and the later communication stages. Th(Log(C2 )) is equal to Th(C2 ) = typ(C2 ), C2 , then it contains the constraints among agents'' types justified by the fact that agents have observed e1 and e2 . A1 and A2 knows C2 so they can use these constraints. The key point is that channel C2 is a refinement of C1 . It is easy to check that f1 defined as the identity function on types and the inclusion function on tokens is a refinement infomorphism (see at the bottom of Figure 3). By Theorem 2.1, C2 constraints are more reliable than C1 constraints. In the general situation, once the states e1 , e2 , ... , en−1 (n ≥ 2) have been observed and a new state en appears, channel Cn = {fn i : Ai → Cn }i∈{1,2} informs about agents communication up to that moment. Cn definition is similar to the previous ones and analogous remarks can be made (see at the top of Figure 3). Theory Th(Log(Cn )) = Th(Cn ) = typ(Cn ), Cn contains the constraints among agents'' types justified by the fact that agents have observed e1 , e2 , ... , en . Cn fn−1 A1 fn−1 1 99PPPPPPPPPPPPP fn 1 UUnnnnnnnnnnnnn f2 1 %%44444444444444444444444444 f1 1 '''',,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, A2 fn 2 ggPPPPPPPPPPPPP fn−1 2 wwnnnnnnnnnnnnn f2 2 ÕÕ f1 2 ØØ Cn−1 . . . C2 f1 C1 Figure 3: Agents communication Remember we have assumed that S is infinite numerable. It is therefore unpractical to let communication finish when all environment states have been observed by A1 and A2. At that point, the family of channels {Cn }n∈N would inform of all the communication stages. It is therefore up to the agents to decide when to stop communicating should a good enough approximation have been reached for the purposes of their respective tasks. But the study of possible termination criteria is outside the scope of this paper and left for future work. From a theoretical point of view, however, we can consider the channel C∗ = {f∗ i : Ai → C∗ }i∈{1,2} which informs of the end of the communication after observing all environment states. 
From a theoretical point of view, however, we can consider the channel C∗ = {f∗i : Ai → C∗}i∈{1,2}, which informs of the end of the communication after observing all environment states. On the one hand, C∗ is defined by:

• tok(C∗) = {cⁿ | n ∈ ℕ}
• typ(C∗) = typ(A1 + A2)
• cⁿ |=C∗ ⟨i, α⟩ if fi(eⁿ) |=Ai α (for n ∈ ℕ and ⟨i, α⟩ ∈ typ(A1 + A2))

On the other hand, f∗i, with i ∈ {1, 2}, is defined by:

• f∗i(α) = ⟨i, α⟩ (for α ∈ typ(Ai))
• f∗i(cⁿ) = fi(eⁿ) (for n ∈ ℕ)

The theorem below constitutes the cornerstone of the model exposed in this paper. It ensures, together with Theorem 2.1, that at each communication stage agents obtain a theory that approximates the theory generated by the logic of SSA more closely.

Theorem 2.2. The following statements hold:

1. For all n ∈ ℕ, C∗ is a refinement of Cⁿ.
2. Th(Log(ℰ)) = Th(C∗) = Th(Log(C∗)).

Proof. 1. It is easy to prove that for each n ∈ ℕ, gⁿ, defined as the identity function on types and the inclusion function on tokens, is a refinement infomorphism from C∗ to Cⁿ.

2. The second equality is straightforward; the first one follows directly from:

cⁿ |=C∗ ⟨i, α⟩  iff  ˇfi(eⁿ) |=Ai α   (by definition of |=C∗)
               iff  eⁿ |=E ˆfi(α)     (because fi is an infomorphism)
               iff  eⁿ |=E ˆf(⟨i, α⟩)  (by definition of ˆf)

3. AN EXAMPLE

In the previous section we have described our formal model for SSA in great detail. However, we have not yet tackled the practical aspect of the model. In this section, we give a brushstroke of the pragmatic view of our approach. We study a very simple example and explain how agents can use the approximations of the logic of SSA obtainable through communication.

Let us consider a system consisting of robots located in a two-dimensional grid, looking for packages with the aim of moving them to a certain destination (Figure 4). Robots can carry only one package at a time, and they cannot move through a package.

Figure 4: The scenario

Robots have a partial view of the domain, and there exist two kinds of robots according to their visual field. Some robots are capable of observing the eight adjoining squares, while others just observe the three squares they have in front (see Figure 5). We call them URDL (short for Up-Right-Down-Left) and LCR (short for Left-Center-Right) robots, respectively. Describing the environment states as well as the robots' perception functions is rather tedious and even unnecessary; we assume the reader has all those descriptions in mind.

All robots in the system must be able to solve package distribution problems cooperatively by communicating their intentions to each other. In order to communicate, agents send messages using some ontology. In our scenario two ontologies coexist, the URDL and LCR ontologies. Both of them are very simple and are confined to describing what robots observe.

Figure 5: Robots' fields of vision

When a robot carrying a package finds another package obstructing its way, it can either go around it or, if there is another robot in its visual field, ask it for assistance. Let us suppose two URDL robots are in a situation like the one depicted in Figure 6. Robot1 (the one carrying a package) decides to ask Robot2 for assistance and sends a request. This request is written below as a KQML message, and it should be interpreted intuitively as: "Robot2, pick up the package located in my Up square, knowing that you are located in my Up-Right square."
(request
  :sender Robot1
  :receiver Robot2
  :language Packages distribution-language
  :ontology URDL-ontology
  :content (pick up U(Package) because UR(Robot2)))

Figure 6: Robot assistance

Robot2 understands the content of the request and can use a rule represented by the following constraint:

⟨1, UR(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, U(Package)⟩ ⊢ ⟨2, U(Package)⟩

The above constraint should be interpreted intuitively as: if Robot2 is situated in Robot1's Up-Right square, Robot1 is situated in Robot2's Up-Left square, and a package is located in Robot1's Up square, then a package is located in Robot2's Up square.

Now, problems arise when an LCR robot and a URDL robot try to interoperate (see Figure 7). Robot1 sends a request of the form:

(request
  :sender Robot1
  :receiver Robot2
  :language Packages distribution-language
  :ontology LCR-ontology
  :content (pick up R(Robot2) because C(Package)))

Figure 7: Ontology mismatch

Robot2 does not understand the content of the request, but the two robots decide to begin a process of alignment, corresponding to a channel C¹. Once finished, Robot2 searches in Th(C¹) for constraints similar to the expected one, that is, those of the form:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, λ(Package)⟩

where λ ∈ {U, R, D, L, UR, DR, DL, UL}. From these, only the following constraints are plausible according to C¹:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, U(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, L(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C¹ ⟨2, DR(Package)⟩

If subsequently both robots, adopting the same roles, take part in a situation like the one depicted in Figure 8, a new process of alignment, corresponding to a channel C², takes place. C² also considers the previous information and hence refines C¹. The only constraint among the above ones that remains plausible according to C² is:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C² ⟨2, U(Package)⟩

Notice that this constraint is an element of the theory of the distributed logic. Agents communicate in order to cooperate successfully, and success is guaranteed by using constraints of the distributed logic.

Figure 8: Refinement
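The pruning of candidate constraints in this example follows the same mechanics as the communication sketch above. A hedged toy version, with observation data we invented to mirror Figures 6 to 8:

```python
# Toy sketch of the robot example: candidate alignments of LCR's C(Package)
# with URDL directions, pruned by successive shared situations. The
# observation tables are our invented encodings, not the paper's data.

candidates = {"U", "L", "DR"}   # survivors of the first alignment (channel C1)

# Second situation (channel C2): what each robot actually observes, encoded
# as the URDL directions in which Robot2 sees a package.
second_situation = {"lcr_sees_center_package": True,
                    "urdl_package_directions": {"U"}}

if second_situation["lcr_sees_center_package"]:
    candidates &= second_situation["urdl_package_directions"]

print(candidates)   # -> {'U'}: C(Package) aligns with U(Package), as in the text
```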
4. CONCLUSIONS AND FURTHER WORK

In this paper we have presented a formal model of semantic alignment as a sequence of information-channel refinements that are relative to the particular states of the environment in which two agents communicate and align their respective conceptualisations of these states. Before us, Kent [6] and Kalfoglou and Schorlemmer [4, 10] have applied Channel Theory to formalise semantic alignment, also using Barwise and Seligman's insight of focusing on tokens as the enablers of information flow. Their approach to semantic alignment, however, like most ontology matching mechanisms developed to date (regardless of whether they follow a functional, design-time-based approach or an interaction-based, run-time-based approach), still defines semantic alignment in terms of a priori design decisions, such as the concept taxonomy of the ontologies or the external sources brought into the alignment process. Instead, the model we have presented in this paper makes explicit the particular states of the environment in which agents are situated and attempt to gradually align their ontological entities.

In the future, our effort will focus on the practical side of the situated semantic alignment problem. We plan to further refine the model presented here (e.g., to include pragmatic issues such as termination criteria for the alignment process) and to devise concrete ontology negotiation protocols based on this model that agents may be able to enact. The formal model exposed in this paper will constitute a solid base for future practical results.

Acknowledgements. This work is supported under the UPIC project, sponsored by Spain's Ministry of Education and Science under grant number TIN2004-07461-C02-02, and also under the OpenKnowledge Specific Targeted Research Project (STREP), sponsored by the European Commission under contract number FP6-027253. Marco Schorlemmer is supported by a Ramón y Cajal Research Fellowship from Spain's Ministry of Education and Science, partially funded by the European Social Fund.

5. REFERENCES

[1] J. Barwise and J. Seligman. Information Flow: The Logic of Distributed Systems. Cambridge University Press, 1997.
[2] C. Ghidini and F. Giunchiglia. Local models semantics, or contextual reasoning = locality + compatibility. Artificial Intelligence, 127(2):221-259, 2001.
[3] F. Giunchiglia and P. Shvaiko. Semantic matching. The Knowledge Engineering Review, 18(3):265-280, 2004.
[4] Y. Kalfoglou and M. Schorlemmer. IF-Map: An ontology-mapping method based on information-flow theory. In Journal on Data Semantics I, LNCS 2800, 2003.
[5] Y. Kalfoglou and M. Schorlemmer. Ontology mapping: The state of the art. The Knowledge Engineering Review, 18(1):1-31, 2003.
[6] R. E. Kent. Semantic integration in the Information Flow Framework. In Semantic Interoperability and Integration, Dagstuhl Seminar Proceedings 04391, 2005.
[7] D. Lenat. CyC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11), 1995.
[8] V. López, M. Sabou, and E. Motta. PowerMap: Mapping the real Semantic Web on the fly. In Proceedings of the ISWC'06, 2006.
[9] F. McNeill. Dynamic Ontology Refinement. PhD thesis, School of Informatics, The University of Edinburgh, 2006.
[10] M. Schorlemmer and Y. Kalfoglou. Progressive ontology alignment for meaning coordination: An information-theoretic foundation. In 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, 2005.
[11] P. Shvaiko and J. Euzenat. A survey of schema-based matching approaches. In Journal on Data Semantics IV, LNCS 3730, 2005.
[12] L. Steels. The origins of ontologies and communication conventions in multi-agent systems. Journal of Autonomous Agents and Multi-Agent Systems, 1(2):169-194, 1998.
[13] J. van Diggelen et al. ANEMONE: An effective minimal ontology negotiation environment. In 5th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, 2006.

APPENDIX

A. CHANNEL THEORY TERMS

Classification: a tuple A = ⟨tok(A), typ(A), |=A⟩ where tok(A) is a set of tokens, typ(A) is a set of types, and |=A is a binary relation between tok(A) and typ(A). If a |=A α then a is said to be of type α.

Infomorphism: f : A → B from classification A to classification B is a contravariant pair of functions f = ⟨ˆf, ˇf⟩, where ˆf : typ(A) → typ(B) and ˇf : tok(B) → tok(A), satisfying the following fundamental property: ˇf(b) |=A α iff b |=B ˆf(α), for each token b ∈ tok(B) and each type α ∈ typ(A).

Channel: consists of two infomorphisms C = {fi : Ai → C}i∈{1,2} with a common codomain C, called the core of the channel.
C's tokens are called connections, and a connection c is said to connect the tokens ˇf1(c) and ˇf2(c).²

²In fact, this is the definition of a binary channel. A channel can be defined with an arbitrary index set.

Sum: given classifications A and B, the sum of A and B, denoted by A + B, is the classification with tok(A + B) = tok(A) × tok(B) = {⟨a, b⟩ | a ∈ tok(A) and b ∈ tok(B)}, typ(A + B) = typ(A) ⊎ typ(B) = {⟨i, γ⟩ | i = 1 and γ ∈ typ(A), or i = 2 and γ ∈ typ(B)}, and relation ⊨_{A+B} defined by:
⟨a, b⟩ ⊨_{A+B} ⟨1, α⟩ if a ⊨_A α
⟨a, b⟩ ⊨_{A+B} ⟨2, β⟩ if b ⊨_B β
Given infomorphisms f : A → C and g : B → C, the sum f + g : A + B → C is defined on types by (f + g)ˆ(⟨1, α⟩) = ˆf(α) and (f + g)ˆ(⟨2, β⟩) = ˆg(β), and on tokens by (f + g)ˇ(c) = ⟨ˇf(c), ˇg(c)⟩.

Theory: given a set Σ, a sequent of Σ is a pair ⟨Γ, Δ⟩ of subsets of Σ. A binary relation ⊢ between subsets of Σ is called a consequence relation on Σ. A theory is a pair T = ⟨Σ, ⊢⟩ where ⊢ is a consequence relation on Σ. A sequent ⟨Γ, Δ⟩ of Σ for which Γ ⊢ Δ is called a constraint of the theory T. T is regular if it satisfies:
1. Identity: α ⊢ α
2. Weakening: if Γ ⊢ Δ, then Γ, Γ′ ⊢ Δ, Δ′
3. Global Cut: if Γ, Π0 ⊢ Δ, Π1 for each partition ⟨Π0, Π1⟩ of Π (i.e., Π0 ∪ Π1 = Π and Π0 ∩ Π1 = ∅), then Γ ⊢ Δ
for all α ∈ Σ and all Γ, Γ′, Δ, Δ′, Π ⊆ Σ.³

Theory generated by a classification: let A be a classification. A token a ∈ tok(A) satisfies a sequent ⟨Γ, Δ⟩ of typ(A) provided that if a is of every type in Γ then it is of some type in Δ. The theory generated by A, denoted by Th(A), is the theory ⟨typ(A), ⊢_A⟩ where Γ ⊢_A Δ if every token in A satisfies ⟨Γ, Δ⟩.

Local logic: a tuple L = ⟨tok(L), typ(L), ⊨_L, ⊢_L, N_L⟩ where:
1. ⟨tok(L), typ(L), ⊨_L⟩ is a classification, denoted by Cla(L),
2. ⟨typ(L), ⊢_L⟩ is a regular theory, denoted by Th(L),
3. N_L is a subset of tok(L), called the normal tokens of L, which satisfy all constraints of Th(L).
A local logic L is sound if every token in Cla(L) is normal, that is, N_L = tok(L). L is complete if every sequent of typ(L) satisfied by every normal token is a constraint of Th(L).

Local logic generated by a classification: given a classification A, the local logic generated by A, written Log(A), is the local logic on A (i.e., Cla(Log(A)) = A), with Th(Log(A)) = Th(A) and such that all its tokens are normal, i.e., N_{Log(A)} = tok(A).

Inverse image: given an infomorphism f : A → B and a local logic L on B, the inverse image of L under f, denoted f⁻¹[L], is the local logic on A such that Γ ⊢_{f⁻¹[L]} Δ if ˆf[Γ] ⊢_L ˆf[Δ], and N_{f⁻¹[L]} = ˇf[N_L] = {a ∈ tok(A) | a = ˇf(b) for some b ∈ N_L}.

Distributed logic: let C = {fi : Ai → C}i∈{1,2} be a channel and L a local logic on its core C; the distributed logic of C generated by L, written DLog_C(L), is the inverse image of L under the sum f1 + f2.

Refinement: let C = {fi : Ai → C}i∈{1,2} and C′ = {f′i : Ai → C′}i∈{1,2} be two channels with the same component classifications A1 and A2. A refinement infomorphism from C′ to C is an infomorphism r : C′ → C such that for each i ∈ {1, 2}, fi = r ∘ f′i (i.e., ˆfi = ˆr ∘ ˆf′i and ˇfi = ˇf′i ∘ ˇr). Channel C′ is a refinement of C if there exists a refinement infomorphism r from C′ to C.

B. CHANNEL THEORY THEOREMS

Theorem B.1. The logic generated by a classification is sound and complete. Furthermore, given a classification A and a logic L on A, L is sound and complete if and only if L = Log(A).

Theorem B.2. Let L be a logic on a classification B and f : A → B an infomorphism.
1. If L is complete then f⁻¹[L] is complete.
2. If L is sound and ˇf is surjective then f⁻¹[L] is sound.
³All theories considered in this paper are regular.
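The definitions in this appendix are mechanical enough to run on small finite examples. The following is a minimal Python sketch, not part of the paper, of a classification and of the membership test behind Th(A): a sequent ⟨Γ, Δ⟩ is a constraint iff every token that is of all types in Γ is of some type in Δ. The class name, tokens, and types are invented for illustration.

class Classification:
    def __init__(self, tokens, types, rel):
        self.tokens = set(tokens)   # tok(A)
        self.types = set(types)     # typ(A)
        self.rel = set(rel)         # |=_A as a set of (token, type) pairs

    def of_type(self, a, alpha):
        return (a, alpha) in self.rel

    def satisfies(self, a, gamma, delta):
        # a satisfies <gamma, delta>: if a is of every type in gamma,
        # then a is of some type in delta
        return (not all(self.of_type(a, g) for g in gamma)
                or any(self.of_type(a, d) for d in delta))

    def is_constraint(self, gamma, delta):
        # <gamma, delta> belongs to Th(A) iff every token satisfies it
        return all(self.satisfies(a, gamma, delta) for a in self.tokens)

A = Classification(tokens={"e1", "e2"},
                   types={"raining", "wet"},
                   rel={("e1", "raining"), ("e1", "wet"), ("e2", "wet")})
print(A.is_constraint({"raining"}, {"wet"}))   # True
print(A.is_constraint({"wet"}, {"raining"}))   # False: e2 is a counterexample

The same check, restricted to the normal tokens of a local logic, is exactly what the distributed logic of Section 2 computes.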
A Formal Model for Situated Semantic Alignment

ABSTRACT
Ontology matching is currently a key technology to achieve the semantic alignment of ontological entities used by knowledge-based applications, and therefore to enable their interoperability in distributed environments such as multiagent systems. Most ontology matching mechanisms, however, assume matching prior to integration and rely on semantics that has been coded a priori in concept hierarchies or external sources. In this paper, we present a formal model for a semantic alignment procedure that incrementally aligns differing conceptualisations of two or more agents relative to their respective perception of the environment or domain they are acting in. It hence makes the situation in which the alignment occurs explicit in the model. We resort to Channel Theory to carry out the formalisation.

1. INTRODUCTION
An ontology is commonly defined as a specification of the conceptualisation of a particular domain. It fixes the vocabulary used by knowledge engineers to denote concepts and their relations, and it constrains the interpretation of this vocabulary to the meaning originally intended by knowledge engineers. As such, ontologies have been widely adopted as a key technology that may favour knowledge sharing in distributed environments, such as multi-agent systems, federated databases, or the Semantic Web. But the proliferation of many diverse ontologies caused by different conceptualisations of even the same domain, and their subsequent specification using varying terminology, has highlighted the need for ontology matching techniques that are capable of computing semantic relationships between entities of separately engineered ontologies [5, 11].

Most ontology matching mechanisms developed so far have taken a classical functional approach to the semantic heterogeneity problem, in which ontology matching is seen as a process taking two or more ontologies as input and producing a semantic alignment of ontological entities as output [3]. Furthermore, matching has often been carried out at design-time, before integrating knowledge-based systems or making them interoperate. This might have been successful for clearly delimited and stable domains and for closed distributed systems, but it is untenable and even undesirable for the kind of applications that are currently deployed in open systems. Multi-agent communication, peer-to-peer information sharing, and web-service composition are all of a decentralised, dynamic, and open-ended nature, and they require ontology matching to be locally performed during run-time. In addition, in many situations peer ontologies are not even open for inspection (e.g., when they are based on commercially confidential information).

Certainly, there exist efforts to efficiently match ontological entities at run-time, taking only those ontology fragments that are necessary for the task at hand [10, 13, 9, 8]. Nevertheless, the techniques used by these systems to establish the semantic relationships between ontological entities, even though applied at run-time, still exploit a priori defined concept taxonomies as they are represented in the graph-based structures of the ontologies to be matched, use previously existing external sources such as thesauri (e.g., WordNet) and upper-level ontologies (e.g., CyC or SUMO), or resort to additional background knowledge repositories or shared instances.
We claim that semantic alignment of ontological terminology is ultimately relative to the particular situation in which the alignment is carried out, and that this situation should be made explicit and brought into the alignment mechanism. Even two agents with identical conceptualisation capabilities, and using exactly the same vocabulary to specify their respective conceptualisations, may fail to interoperate in a concrete situation because of their differing perception of the domain. Imagine a situation in which two agents are facing each other in front of a checker board. Agent A1 may conceptualise a figure on the board as situated on the left margin of the board, while agent A2 may conceptualise the same figure as situated on the right. Although the conceptualisation of 'left' and 'right' is done in exactly the same manner by both agents, and even if both use the terms left and right in their communication, they will still need to align their respective vocabularies if they want to successfully communicate to each other actions that change the position of figures on the checker board. Their semantic alignment, however, will only be valid in the scope of their interaction within this particular situation or environment. The same agents situated differently may produce a different alignment.

This scenario is reminiscent of those in which a group of distributed agents adapt to form an ontology and a shared lexicon in an emergent, bottom-up manner, with only local interactions and no central control authority [12]. This sort of self-organised emergence of shared meaning is ultimately grounded in the physical interaction of agents with the environment. In this paper, however, we address the case in which agents are already endowed with a top-down engineered ontology (it can even be the same one), which they do not adapt or refine, but for which they want to find the semantic relationships with separate ontologies of other agents on the grounds of their communication within a specific situation. In particular, we provide a formal model that formalises situated semantic alignment as a sequence of information-channel refinements in the sense of Barwise and Seligman's theory of information flow [1]. This theory is particularly useful for our endeavour because it models the flow of information occurring in distributed systems due to the particular situations, or tokens, that carry information. Analogously, the semantic alignment that will allow information to flow ultimately will be carried by the particular situation the agents are acting in.

We shall therefore consider a scenario with two or more agents situated in an environment. Each agent will have its own viewpoint of the environment so that, if the environment is in a concrete state, both agents may have different perceptions of this state. Because of these differences there may be a mismatch in the meaning of the syntactic entities by which agents describe their perceptions (and which constitute the agents' respective ontologies). We state that these syntactic entities can be related according to the intrinsic semantics provided by the existing relationship between the agents' viewpoints of the environment. The existence of this relationship is precisely justified by the fact that the agents are situated and observe the same environment. In Section 2 we describe our formal model for Situated Semantic Alignment (SSA).
First, in Section 2.1 we associate a channel to the scenario under consideration and show how the distributed logic generated by this channel provides the logical relationships between the agents' viewpoints of the environment. Second, in Section 2.2 we present a method by which agents obtain approximations of this distributed logic. These approximations gradually become more reliable as the method is applied. In Section 3 we report on an application of our method. Conclusions and further work are analyzed in Section 4. Finally, an appendix summarizes the terms and theorems of Channel Theory used along the paper. We do not assume any knowledge of Channel Theory; we restate basic definitions and theorems in the appendix, but any detailed exposition of the theory is outside the scope of this paper.

2. A FORMAL MODEL FOR SSA

2.1 The Logic of SSA

Consider a scenario with two agents A1 and A2 situated in an environment E (the generalization to any numerable set of agents is straightforward). We associate a numerable set S of states to E and, at any given instant, we suppose E to be in one of these states. We further assume that each agent is able to observe the environment and has its own perception of it. This ability is faithfully captured by a surjective function see_i : S → P_i, where i ∈ {1, 2}, and typically see1 and see2 are different.

According to Channel Theory, information is only viable where there is a systematic way of classifying some range of things as being this way or that, in other words, where there is a classification (see appendix A). So in order to be within the framework of Channel Theory, we must associate classifications to the components of our system. For each i ∈ {1, 2}, we consider a classification Ai that models Ai's viewpoint of E. First, tok(Ai) is composed of Ai's perceptions of E states, that is, tok(Ai) = Pi. Second, typ(Ai) contains the syntactic entities by which Ai describes its perceptions, the ones constituting the ontology of Ai. Finally, ⊨_{Ai} synthesizes how Ai relates its perceptions with these syntactic entities.

Now, with the aim of associating environment E with a classification E we choose the power classification of S as E, which is the classification whose set of types is equal to 2^S, whose tokens are the elements of S, and for which a token e is of type ε if e ∈ ε. The reason for taking the power classification is that there are no syntactic entities that may play the role of types for E since, in general, there is no global conceptualisation of the environment. However, the set of types of the power classification includes all possible token configurations potentially described by types. Thus tok(E) = S, typ(E) = 2^S and e ⊨_E ε if and only if e ∈ ε.

The notion of channel (see appendix A) is fundamental in Barwise and Seligman's theory. The information flow among the components of a distributed system is modelled in terms of a channel, and the relationships among these components are expressed via infomorphisms (see appendix A), which provide a way of moving information between them. The information flow of the scenario under consideration is accurately described by the channel E = {fi : Ai → E}i∈{1,2} defined as follows:
• ˆfi(α) = {e ∈ tok(E) | see_i(e) ⊨_{Ai} α} for each α ∈ typ(Ai)
• ˇfi(e) = see_i(e) for each e ∈ tok(E)
where i ∈ {1, 2}.
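Before analysing this channel, it may help to see it on a toy instance. The following Python sketch builds ˆfi and ˇfi for an invented four-state environment in which the two agents' viewpoints are mirror images; the state names, perception labels, and the use of plain equality for ⊨_{Ai} are illustrative assumptions only, not part of the model.

S = ["s0", "s1", "s2", "s3"]                      # tok(E) = S

see = {
    1: {"s0": "left",  "s1": "left",  "s2": "right", "s3": "right"},
    2: {"s0": "right", "s1": "right", "s2": "left",  "s3": "left"},
}

def f_check(i, e):
    # token part of f_i: a state e is sent to agent i's perception see_i(e)
    return see[i][e]

def f_hat(i, alpha):
    # type part of f_i: the set of states whose perception by agent i
    # is of type alpha (here a perception trivially classifies itself,
    # so |=_{A_i} is modeled as equality)
    return {e for e in S if see[i][e] == alpha}

print(sorted(f_hat(1, "left")))            # ['s0', 's1']
print(f_check(1, "s2"), f_check(2, "s2"))  # the two images of one connection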
Definition of ˇfi seems natural, while ˆfi is defined in such a way that the fundamental property of the infomorphisms is fulfilled: ˇfi(e) ⊨_{Ai} α iff e ⊨_E ˆfi(α).

Figure 1: Channel E

E explains the information flow of our scenario by virtue of agents A1 and A2 being situated and perceiving the same environment E. We want to obtain meaningful relations among agents' syntactic entities, that is, agents' types. We state that meaningfulness must be in accord with E.

The sum operation (see appendix A) gives us a way of putting the two agents' classifications of channel E together into a single classification, namely A1 + A2, and also the two infomorphisms together into a single infomorphism, f1 + f2 : A1 + A2 → E. A1 + A2 assembles agents' classifications in a very coarse way. tok(A1 + A2) is the cartesian product of tok(A1) and tok(A2), that is, tok(A1 + A2) = {⟨p1, p2⟩ | pi ∈ Pi}, so a token of A1 + A2 is a pair of agents' perceptions with no restrictions. typ(A1 + A2) is the disjoint union of typ(A1) and typ(A2), and ⟨p1, p2⟩ is of type ⟨i, α⟩ if pi is of type α. We attach importance to taking the disjoint union because A1 and A2 could use identical types with the purpose of describing their respective perceptions of E.

Classification A1 + A2 seems to be the natural place in which to search for relations among agents' types. Now, Channel Theory provides a way to make all these relations explicit in a logical fashion by means of theories and local logics (see appendix A). The theory generated by the sum classification, Th(A1 + A2), and hence its generated logic, Log(A1 + A2), involve all those constraints among agents' types valid according to A1 + A2. Notice however that these constraints are obvious. As we stated above, meaningfulness must be in accord with channel E. Classifications A1 + A2 and E are connected via the sum infomorphism, f = f1 + f2, where:
• ˆf(⟨i, α⟩) = ˆfi(α) = {e ∈ tok(E) | see_i(e) ⊨_{Ai} α} for each ⟨i, α⟩ ∈ typ(A1 + A2)
• ˇf(e) = ⟨ˇf1(e), ˇf2(e)⟩ = ⟨see1(e), see2(e)⟩ for each e ∈ tok(E)
Meaningful constraints among agents' types are in accord with channel E because they are computed making use of f, as we expound below.

As important as the notion of channel is the concept of distributed logic (see appendix A). Given a channel C and a logic L on its core, DLog_C(L) represents the reasoning about relations among the components of C justified by L. If L = Log(C), the distributed logic, which we denote by Log*(C), captures in a logical fashion the information flow inherent in the channel. In our case, Log*(E) explains the relationship between the agents' viewpoints of the environment in a logical fashion. On the one hand, constraints of Th(Log*(E)) are defined by:

Γ ⊢_{Log*(E)} Δ iff ˆf[Γ] ⊢_E ˆf[Δ]   (1)

where Γ, Δ ⊆ typ(A1 + A2). On the other hand, the set of normal tokens, N_{Log*(E)}, is equal to the range of the function ˇf:

N_{Log*(E)} = {⟨see1(e), see2(e)⟩ | e ∈ S}

Therefore, a normal token is a pair of agents' perceptions that are restricted by coming from the same environment state (unlike A1 + A2 tokens). All constraints of Th(Log*(E)) are satisfied by all normal tokens (because of being a logic). In this particular case, this condition is also sufficient (the proof is straightforward); as an alternative to (1) we have:

Γ ⊢_{Log*(E)} Δ iff every normal token satisfies ⟨Γ, Δ⟩   (2)

where Γ, Δ ⊆ typ(A1 + A2). Log*(E) is the logic of SSA. Th(Log*(E)) comprises the most meaningful constraints among agents' types in accord with channel E. In other words, the logic of SSA contains and also justifies the most meaningful relations among those syntactic entities that agents use in order to describe their own environment perceptions.
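Characterisation (2) can be checked mechanically once the normal tokens are enumerated. The sketch below does so for the toy mirror-image environment used earlier; the tagged-pair encoding ⟨i, α⟩ follows the sum construction, while the concrete states and types remain invented.

S = ["s0", "s1", "s2", "s3"]
see1 = {"s0": "left", "s1": "left", "s2": "right", "s3": "right"}
see2 = {"s0": "right", "s1": "right", "s2": "left", "s3": "left"}

# normal tokens: pairs <see1(e), see2(e)> coming from one state e
normal = [(see1[e], see2[e]) for e in S]

def holds(tok, typ):
    i, alpha = typ          # types of A1 + A2 are tagged pairs <i, alpha>
    return tok[i - 1] == alpha

def constraint(gamma, delta):
    # Gamma |- Delta per (2): every normal token of all types in Gamma
    # is of some type in Delta
    return all(any(holds(t, d) for d in delta)
               for t in normal if all(holds(t, g) for g in gamma))

print(constraint({(1, "left")}, {(2, "right")}))   # True: a meaningful alignment
print(constraint({(1, "left")}, {(2, "left")}))    # False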
Log*(E) is complete since Log(E) is complete, but it is not necessarily sound because, although Log(E) is sound, ˇf is not surjective in general (see appendix B). If Log*(E) is also sound then Log*(E) = Log(A1 + A2) (see appendix B). That means there is no significant relation between the agents' points of view of the environment according to E. It is precisely the fact that Log*(E) is unsound that allows a significant relation between the agents' viewpoints. This relation is expressed at the type level in terms of constraints by Th(Log*(E)) and at the token level by N_{Log*(E)}.

2.2 Approaching the logic of SSA through communication

We have dubbed Log*(E) the logic of SSA. Th(Log*(E)) comprehends the most meaningful constraints among agents' types according to E. The problem is that neither agent can make use of this theory because they do not know E completely. In this section, we present a method by which agents obtain approximations to Th(Log*(E)). We also prove that these approximations gradually become more reliable as the method is applied.

Agents can obtain approximations to Th(Log*(E)) through communication. A1 and A2 communicate by exchanging information about their perceptions of environment states. This information is expressed in terms of their own classification relations. Specifically, if E is in a concrete state e, we assume that agents can convey to each other which types are satisfied by their respective perceptions of e and which are not. This exchange generates a channel C = {fi : Ai → C}i∈{1,2}, and Th(Log*(C)) contains the constraints among agents' types justified by the fact that agents have observed e. Now, if E turns to another state e′ and agents proceed as before, another channel C′ = {f′i : Ai → C′}i∈{1,2} gives account of the new situation, considering also the previous information. Th(Log*(C′)) comprises the constraints among agents' types justified by the fact that agents have observed e and e′. The significant point is that C′ is a refinement of C (see appendix A). Theorem 2.1 below ensures that the refined channel involves more reliable information.

The communication supposedly ends when agents have observed all the environment states. Again this situation can be modeled by a channel, call it C* = {f*i : Ai → C*}i∈{1,2}. Theorem 2.2 states that Th(Log*(C*)) = Th(Log*(E)). Theorem 2.1 and Theorem 2.2 assure that, by applying the method, agents can obtain gradually more reliable approximations to Th(Log*(E)).

THEOREM 2.1. Let C = {fi : Ai → C}i∈{1,2} and C′ = {f′i : Ai → C′}i∈{1,2} be two channels. If C′ is a refinement of C then:
1. Th(Log*(C′)) ⊆ Th(Log*(C))
2. N_{Log*(C′)} ⊇ N_{Log*(C)}

PROOF. Since C′ is a refinement of C, there exists a refinement infomorphism r from C′ to C; so fi = r ∘ f′i. Let A =def A1 + A2, f =def f1 + f2 and f′ =def f′1 + f′2.
1. Let Γ and Δ be subsets of typ(A) and assume that Γ ⊢_{Log*(C′)} Δ, which means ˆf′[Γ] ⊢_{C′} ˆf′[Δ]. We have to show that ˆf[Γ] ⊢_C ˆf[Δ]. We proceed by reductio ad absurdum. Suppose c ∈ tok(C) does not satisfy the sequent ⟨ˆf[Γ], ˆf[Δ]⟩. Then c ⊨_C ˆf(γ) for all γ ∈ Γ and c ⊭_C ˆf(δ) for all δ ∈ Δ. Let us choose an arbitrary γ ∈ Γ. We have that c ⊨_C ˆf(γ) iff c ⊨_C ˆr(ˆf′(γ)) iff ˇr(c) ⊨_{C′} ˆf′(γ). Consequently, ˇr(c) ⊨_{C′} ˆf′(γ) for all γ ∈ Γ. Since ˆf′[Γ] ⊢_{C′} ˆf′[Δ], there exists δ* ∈ Δ such that ˇr(c) ⊨_{C′} ˆf′(δ*).
A sequence of equivalences similar to the above one justifies c ⊨_C ˆf(δ*), contradicting that c is a counterexample to ⟨ˆf[Γ], ˆf[Δ]⟩. Hence Γ ⊢_{Log*(C)} Δ, as we wanted to prove.
2. Let ⟨a1, a2⟩ ∈ tok(A) and assume ⟨a1, a2⟩ ∈ N_{Log*(C)}. Therefore, there exists a token c in C such that ⟨a1, a2⟩ = ˇf(c). Then we have ai = ˇfi(c) = (ˇf′i ∘ ˇr)(c) = ˇf′i(ˇr(c)), for i ∈ {1, 2}. Hence ⟨a1, a2⟩ = ˇf′(ˇr(c)) and ⟨a1, a2⟩ ∈ N_{Log*(C′)}. Consequently, N_{Log*(C′)} ⊇ N_{Log*(C)}, which concludes the proof.

REMARK 2.1. Theorem 2.1 asserts that the more refined channel gives more reliable information. Even though its theory has fewer constraints, it has more normal tokens to which they apply.

In the remainder of the section, we explicitly describe the process of communication and we conclude with the proof of Theorem 2.2. Let us assume that typ(Ai) is finite for i ∈ {1, 2} and S is infinite numerable, though the finite case can be treated in a similar form. We also choose an infinite numerable set of symbols {cⁿ | n ∈ ℕ}.¹ We omit infomorphism superscripts when no confusion arises. Types are usually denoted by Greek letters and tokens by Latin letters; thus, for an infomorphism f, ˆf applies to types, as in ˆf(α), and ˇf to tokens.

¹We write these symbols with superindices because we limit the use of subindices to what concerns agents. Note that this set is chosen with the same cardinality as S.

Agents' communication starts from the observation of E. Let us suppose that E is in state e¹ ∈ S = tok(E). A1's perception of e¹ is ˇf1(e¹) and A2's perception of e¹ is ˇf2(e¹). We take for granted that A1 can communicate to A2 those types that are and are not satisfied by ˇf1(e¹) according to its classification A1. So can A2. Since both typ(A1) and typ(A2) are finite, this process eventually finishes. After this communication a channel C¹ = {fi¹ : Ai → C¹}i∈{1,2} arises (see Figure 2).

Figure 2: The first communication stage

On the one hand, C¹ is defined by:
• tok(C¹) = {c¹}
• typ(C¹) = typ(A1 + A2)
• c¹ ⊨_{C¹} ⟨i, α⟩ if ˇfi(e¹) ⊨_{Ai} α (for every ⟨i, α⟩ ∈ typ(A1 + A2))
On the other hand, fi¹, with i ∈ {1, 2}, is defined by:
• ˆfi¹(α) = ⟨i, α⟩ (for α ∈ typ(Ai))
• ˇfi¹(c¹) = ˇfi(e¹)

Log*(C¹) represents the reasoning about the first stage of communication. It is easy to prove that Th(Log*(C¹)) = Th(C¹). The significant point is that both agents know C¹ as the result of the communication. Hence they can separately compute the theory Th(C¹) = ⟨typ(C¹), ⊢_{C¹}⟩, which contains the constraints among agents' types justified by the fact that agents have observed e¹.

Now, let us assume that E turns to a new state e². Agents can proceed as before, exchanging this time information about their perceptions of e². Another channel C² = {fi² : Ai → C²}i∈{1,2} comes up. We define C² so as to take also into account the information provided by the previous stage of communication. On the one hand, C² is defined by:
• tok(C²) = {c¹, c²}
• typ(C²) = typ(A1 + A2)
• cᵐ ⊨_{C²} ⟨i, α⟩ if ˇfi(eᵐ) ⊨_{Ai} α (for m ∈ {1, 2} and ⟨i, α⟩ ∈ typ(A1 + A2))
On the other hand, fi², with i ∈ {1, 2}, is defined by:
• ˆfi²(α) = ⟨i, α⟩ (for α ∈ typ(Ai))
• ˇfi²(cᵐ) = ˇfi(eᵐ) (for m ∈ {1, 2})

Log*(C²) represents the reasoning about the former and the later communication stages. Th(Log*(C²)) is equal to Th(C²) = ⟨typ(C²), ⊢_{C²}⟩; hence it contains the constraints among agents' types justified by the fact that agents have observed e¹ and e². A1 and A2 know C², so they can use these constraints. The key point is that channel C² is a refinement of C¹. It is easy to check that the map defined as the identity function on types and the inclusion function on tokens is a refinement infomorphism from C² to C¹ (see the bottom of Figure 3). By Theorem 2.1, C² constraints are more reliable than C¹ constraints.
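The staged construction of C¹, C², ... amounts to simple bookkeeping: each observed state contributes one connection token, and the constraint check runs over all tokens gathered so far. The following Python sketch, with invented observations, illustrates this; consistent with Theorem 2.1, each new stage can only prune surviving constraints while enlarging the set of tokens they are checked against.

class StagedChannel:
    def __init__(self):
        self.tokens = []                 # one entry per connection c^n

    def observe(self, true_types):
        # true_types: the set of pairs <i, alpha> reported true at e^n
        self.tokens.append(frozenset(true_types))

    def is_constraint(self, gamma, delta):
        # <gamma, delta> holds in Th(C^n) iff every token of all types
        # in gamma is also of some type in delta
        return all(delta & tok for tok in self.tokens if gamma <= tok)

ch = StagedChannel()
ch.observe({(1, "U"), (2, "U"), (2, "L")})            # stage C^1
print(ch.is_constraint({(1, "U")}, {(2, "L")}))       # True so far
ch.observe({(1, "U"), (2, "U")})                      # stage C^2 refines C^1
print(ch.is_constraint({(1, "U")}, {(2, "L")}))       # False: pruned
print(ch.is_constraint({(1, "U")}, {(2, "U")}))       # True: survives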
In the general situation, once the states e¹, e², ..., eⁿ⁻¹ (n ≥ 2) have been observed and a new state eⁿ appears, a channel Cⁿ = {fiⁿ : Ai → Cⁿ}i∈{1,2} informs about the agents' communication up to that moment. Cⁿ's definition is similar to the previous ones and analogous remarks can be made (see the top of Figure 3). The theory Th(Log*(Cⁿ)) = Th(Cⁿ) = ⟨typ(Cⁿ), ⊢_{Cⁿ}⟩ contains the constraints among agents' types justified by the fact that agents have observed e¹, e², ..., eⁿ.

Figure 3: Agents communication

Remember we have assumed that S is infinite numerable. It is therefore unpractical to let communication finish when all environment states have been observed by A1 and A2. At that point, the family of channels {Cⁿ}n∈ℕ would inform of all the communication stages. It is therefore up to the agents to decide when to stop communicating, should a good enough approximation have been reached for the purposes of their respective tasks. But the study of possible termination criteria is outside the scope of this paper and left for future work. From a theoretical point of view, however, we can consider the channel C* = {fi* : Ai → C*}i∈{1,2} which informs of the end of the communication after observing all environment states. On the one hand, C* is defined by:
• tok(C*) = {cⁿ | n ∈ ℕ}
• typ(C*) = typ(A1 + A2)
• cⁿ ⊨_{C*} ⟨i, α⟩ if ˇfi(eⁿ) ⊨_{Ai} α (for n ∈ ℕ and ⟨i, α⟩ ∈ typ(A1 + A2))
On the other hand, fi*, with i ∈ {1, 2}, is defined by:
• ˆfi*(α) = ⟨i, α⟩ (for α ∈ typ(Ai))
• ˇfi*(cⁿ) = ˇfi(eⁿ) (for n ∈ ℕ)

The theorem below constitutes the cornerstone of the model exposed in this paper. It ensures, together with Theorem 2.1, that at each communication stage agents obtain a theory that approximates more closely the theory generated by the logic of SSA.

THEOREM 2.2. The following statements hold:
1. For all n ∈ ℕ, C* is a refinement of Cⁿ.
2. Th(Log*(E)) = Th(C*) = Th(Log*(C*)).

3. AN EXAMPLE

In the previous section we have described in great detail our formal model for SSA. However, we have not tackled the practical aspect of the model yet. In this section, we give a brushstroke of the pragmatic view of our approach. We study a very simple example and explain how agents can use those approximations of the logic of SSA they can obtain through communication. Let us reflect on a system consisting of robots located in a two-dimensional grid looking for packages with the aim of moving them to a certain destination (Figure 4). Robots can carry only one package at a time and they cannot move through a package.

Figure 4: The scenario

Robots have a partial view of the domain and there exist two kinds of robots according to the visual field they have. Some robots are capable of observing the eight adjoining squares, but others just observe the three squares they have in front (see Figure 5). We call them URDL (shortened form of Up-Right-Down-Left) and LCR (abbreviation for Left-Center-Right) robots respectively. Describing the environment states as well as the robots' perception functions is rather tedious and even unnecessary; we assume the reader has all those descriptions in mind. All robots in the system must be able to solve package distribution problems cooperatively by communicating their intentions to each other. In order to communicate, agents send messages using some ontology. In our scenario, two ontologies coexist: the URDL and the LCR ontology.
Both of them are very simple and are just confined to describing what the robots observe.

Figure 5: Robots' field of vision

When a robot carrying a package finds another package obstructing its way, it can either go around it or, if there is another robot in its visual field, ask it for assistance. Let us suppose two URDL robots are in a situation like the one depicted in Figure 6. Robot1 (the one carrying a package) decides to ask Robot2 for assistance and sends a request. This request is written below as a KQML message and it should be interpreted intuitively as: Robot2, pick up the package located in my 'Up' square, knowing that you are located in my 'Up-Right' square.

(request
  :sender Robot1
  :receiver Robot2
  :language Packages distribution-language
  :ontology URDL-ontology
  :content (pick up U(Package) because UR(Robot2)))

Figure 6: Robot assistance

Robot2 understands the content of the request and it can use a rule represented by the following constraint:

⟨1, UR(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, U(Package)⟩ ⊢ ⟨2, U(Package)⟩

The above constraint should be interpreted intuitively as: if Robot2 is situated in Robot1's 'Up-Right' square, Robot1 is situated in Robot2's 'Up-Left' square and a package is located in Robot1's 'Up' square, then a package is located in Robot2's 'Up' square. Now, problems arise when an LCR robot and a URDL robot try to interoperate. See Figure 7. Robot1 sends a request of the form:

(request
  :sender Robot1
  :receiver Robot2
  :language Packages distribution-language
  :ontology LCR-ontology
  :content (pick up R(Robot2) because C(Package)))

Robot2 does not understand the content of the request, but the two robots decide to begin a process of alignment, corresponding with a channel C1. Once finished, Robot2 searches in Th(C1) for constraints similar to the expected one, that is, those of the form:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢_{C1} ⟨2, λ(Package)⟩

where λ ∈ {U, R, D, L, UR, DR, DL, UL}. From these, only the following constraints are plausible according to C1:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢_{C1} ⟨2, U(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢_{C1} ⟨2, L(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢_{C1} ⟨2, DR(Package)⟩

Figure 7: Ontology mismatch

If subsequently both robots, adopting the same roles, take part in a situation like the one depicted in Figure 8, a new process of alignment, corresponding with a channel C2, takes place. C2 also considers the previous information and hence refines C1. The only constraint from the above ones that remains plausible according to C2 is:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢_{C2} ⟨2, U(Package)⟩

Notice that this constraint is an element of the theory of the distributed logic. Agents communicate in order to cooperate successfully, and success is guaranteed using constraints of the distributed logic.

Figure 8: Refinement

4. CONCLUSIONS AND FURTHER WORK

In this paper we have exposed a formal model of semantic alignment as a sequence of information-channel refinements that are relative to the particular states of the environment in which two agents communicate and align their respective conceptualisations of these states. Before us, Kent [6] and Kalfoglou and Schorlemmer [4, 10] have applied Channel Theory to formalise semantic alignment, also using Barwise and Seligman's insight to focus on tokens as the enablers of information flow. Their approach to semantic alignment, however, like most ontology matching mechanisms developed to date (regardless of whether they follow a functional, design-time-based approach, or an interaction-based, runtime-based approach), still defines semantic alignment in terms of a priori design decisions such as the concept taxonomy of the ontologies or the external sources brought into the alignment process. Instead, the model we have presented in this paper makes explicit the particular states of the environment in which agents are situated and are attempting to gradually align their ontological entities. In the future, our effort will focus on the practical side of the situated semantic alignment problem. We plan to further refine the model presented here (e.g., to include pragmatic issues such as termination criteria for the alignment process) and to devise concrete ontology negotiation protocols based on this model that agents may be able to enact. The formal model exposed in this paper will constitute a solid base for future practical results.

Acknowledgements

This work is supported under the UPIC project, sponsored by Spain's Ministry of Education and Science under grant number TIN2004-07461-C02-02, and also under the OpenKnowledge Specific Targeted Research Project (STREP), sponsored by the European Commission under contract number FP6-027253. Marco Schorlemmer is supported by a Ramón y Cajal Research Fellowship from Spain's Ministry of Education and Science, partially funded by the European Social Fund.
We plan to further refine the model presented here (e.g., to include pragmatic issues such as termination criteria for the alignment process) and to devise concrete ontology negotiation protocols based on this model that agents may be able to enact. The formal model exposed in this paper will constitute a solid base of future practical results. Acknowledgements This work is supported under the UPIC project, sponsored by Spain's Ministry of Education and Science under grant number TIN2004-07461-C02 - 02 and also under the OpenKnowledge Specific Targeted Research Project (STREP), sponsored by the European Commission under contract number FP6-027253. Marco Schorlemmer is supported by a Ram ´ on y Cajal Research Fellowship from Spain's Ministry of Education and Science, partially funded by the European Social Fund. 5. REFERENCES APPENDIX A. CHANNEL THEORY TERMS Classification: is a tuple A = ~ tok (A), typ (A), | = A ~ where tok (A) is a set of tokens, typ (A) is a set of types and | = A is a binary relation between tok (A) and typ (A). If a | = A α then a is said to be of type α. Infomorphism: f: A → B from classifications A to B is a contravariant pair of functions f = ~ ˆf, ˇf ~, where fˆ: typ (A) → typ (B) and fˇ: tok (B) → tok (A), satisfying the following fundamental property: ˇf (b) | = A α iff b | = B for each token b ∈ tok (B) and each type α ∈ typ (A). Channel: consists of two infomorphisms C = {fi: Ai → C} i ∈ {1,2} with a common codomain C, called the core of C. C tokens are called connections and a connection c is said to connect tokens ˇf1 (c) and ˇf2 (c).2 Sum: given classifications A and B, the sum of A and B, denoted by A + B, is the classification with tok (A + B) = tok (A) × tok (B) = {~ a, b ~ | a ∈ tok (A) and b ∈ tok (B)}, typ (A + B) = typ (A) ~ typ (B) = {~ i, γ ~ | i = 1 and γ ∈ typ (A) or i = 2 and γ ∈ typ (B)} and relation | = A+B defined by: ~ a, b ~ | = A+B ~ 1, α ~ if a | = A α ~ a, b ~ | = A+B ~ 2, β ~ if b | = B β Given infomorphisms f: A → C and g: B → C, the sum f + g: A + B → C is defined on types by (f ˆ + g) (~ 1, α ~) = ˆf (α) and (f ˆ + g) (~ 2, β ~) = ˆg (β), and on tokens by (f ˇ + g) (c) = ~ ˇf (c), ˇg (c) ~. Theory: given a set Σ, a sequent of Σ is a pair ~ Γ, Δ ~ of subsets of Σ. A binary relation ~ between subsets of Σ is called a consequence relation on Σ. A theory is a pair T = ~ Σ, ~ ~ where ~ is a consequence relation on Σ. A sequent ~ Γ, Δ ~ of Σ for which Γ ~ Δ is called a constraint of the theory T. T is regular if it satisfies: 1. Identity: α ~ α 2. Weakening: if Γ ~ Δ, then Γ, Γ ~ ~ Δ, Δ ~ 2In fact, this is the definition of a binary channel. A channel can be defined with an arbitrary index set. 3. Global Cut: if Γ, Il0 ~ Δ, Il1 for each partition ~ Il0, Il1 ~ of Il (i.e., Il0 ∪ Il1 = Il and Il0 ∩ Il1 = ∅), then Γ ~ Δ for all α ∈ Σ and all Γ, Γ ~, Δ, Δ ~, Il ⊆ Σ .3 Theory generated by a classification: let A be a classification. A token a ∈ tok (A) satisfies a sequent ~ Γ, Δ ~ of typ (A) provided that if a is of every type in Γ then it is of some type in Δ. The theory generated by A, denoted by Th (A), is the theory ~ typ (A), ~ A ~ where Γ ~ A Δ if every token in A satisfies ~ Γ, Δ ~. Local logic: is a tuple, E = ~ tok (, E), typ (, E), | = #, ~ #, N #~ where: 1. ~ tok (, E), typ (, E), | = #~ is a classification denoted by Cla (, E), 2. ~ typ (, E), ~ #~ is a regular theory denoted by Th (, E), 3. N #is a subset of tok (, E), called the normal tokens of, E, which satisfy all constraints of Th (, E). 
A local logic L is sound if every token in Cla(L) is normal, that is, N_L = tok(L). L is complete if every sequent of typ(L) satisfied by every normal token is a constraint of Th(L).

Local logic generated by a classification: given a classification A, the local logic generated by A, written Log(A), is the local logic on A (i.e., Cla(Log(A)) = A), with Th(Log(A)) = Th(A) and such that all its tokens are normal, i.e., N_{Log(A)} = tok(A).

Inverse image: given an infomorphism f : A → B and a local logic L on B, the inverse image of L under f, denoted f⁻¹[L], is the local logic on A such that Γ ⊢_{f⁻¹[L]} Δ if ˆf[Γ] ⊢_L ˆf[Δ], and N_{f⁻¹[L]} = ˇf[N_L] = {a ∈ tok(A) | a = ˇf(b) for some b ∈ N_L}.

Distributed logic: let C = {fi : Ai → C}i∈{1,2} be a channel and L a local logic on its core C; the distributed logic of C generated by L, written DLog_C(L), is the inverse image of L under the sum f1 + f2.

Refinement: let C = {fi : Ai → C}i∈{1,2} and C′ = {f′i : Ai → C′}i∈{1,2} be two channels with the same component classifications A1 and A2. A refinement infomorphism from C′ to C is an infomorphism r : C′ → C such that for each i ∈ {1, 2}, fi = r ∘ f′i (i.e., ˆfi = ˆr ∘ ˆf′i and ˇfi = ˇf′i ∘ ˇr). Channel C′ is a refinement of C if there exists a refinement infomorphism r from C′ to C.

B. CHANNEL THEORY THEOREMS

Theorem B.1. The logic generated by a classification is sound and complete. Furthermore, given a classification A and a logic L on A, L is sound and complete if and only if L = Log(A).

Theorem B.2. Let L be a logic on a classification B and f : A → B an infomorphism.
1. If L is complete then f⁻¹[L] is complete.
2. If L is sound and ˇf is surjective then f⁻¹[L] is sound.
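To connect the appendix back to Section 3's example: the way Robot2 prunes the eight candidate conclusions λ(Package) across alignment stages can be written in a few lines of Python. Everything below (the string encoding of tagged types and the two stage observations) is our own illustrative reconstruction of Figures 7 and 8, not code from the paper.

PREMISES = {(1, "R(Robot2)"), (2, "UL(Robot1)"), (1, "C(Package)")}
CANDIDATES = [(2, lam + "(Package)")
              for lam in ("U", "R", "D", "L", "UR", "DR", "DL", "UL")]

def plausible(stages):
    # a candidate conclusion survives iff every connection token whose
    # types include all premises also includes the conclusion
    return [c for c in CANDIDATES
            if all(c in tok for tok in stages if PREMISES <= tok)]

c1 = [PREMISES | {(2, "U(Package)"), (2, "L(Package)"), (2, "DR(Package)")}]
print(plausible(c1))        # three conclusions remain plausible after C1

c2 = c1 + [PREMISES | {(2, "U(Package)")}]
print(plausible(c2))        # only (2, 'U(Package)') survives the refinement C2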
I-65
Graphical Models for Online Solutions to Interactive POMDPs
We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models, called interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.
[ "graphic model", "interact partial observ markov decis process", "interact dynam influenc diagram", "influenc diagram", "agent onlin", "sequenti decis-make", "partial observ multiag environ", "multiag environ", "nash equilibrium profil", "independ structur", "multi-agent influenc diagram", "influenc diagram network", "multiplex", "polici link", "depend link", "interact influenc diagram", "onlin sequenti decis-make", "dynam influenc diagram", "decis-make", "agent model" ]
[ "P", "P", "P", "P", "P", "U", "M", "M", "U", "M", "M", "M", "U", "M", "M", "R", "M", "M", "U", "R" ]
Graphical Models for Online Solutions to Interactive POMDPs
Prashant Doshi, Dept. of Computer Science, University of Georgia, Athens, GA 30602, USA, pdoshi@cs.uga.edu
Yifeng Zeng, Dept. of Computer Science, Aalborg University, DK-9220 Aalborg, Denmark, yfzeng@cs.aau.edu
Qiongyu Chen, Dept. of Computer Science, National Univ. of Singapore, 117543, Singapore, chenqy@comp.nus.edu.sg

ABSTRACT
We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models, called interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.

Categories and Subject Descriptors
I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems

General Terms
Theory

1. INTRODUCTION
Interactive partially observable Markov decision processes (I-POMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments. They generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment. The models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11]. I-POMDPs adopt a subjective approach to understanding strategic behavior, rooted in a decision-theoretic framework that takes a decision-maker's perspective in the interaction.

In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14], and more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms using a graphical formalism to represent games of imperfect information, with a decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID, and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game keeping in mind the different models of each agent.
Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently. However, MAIDs provide an analysis of the game from an external viewpoint, and the applicability of both is limited to static single play games. Matters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe. I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both the other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations.

In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special-purpose policy link introduced in the representation of the I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the model update link that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of the I-DID that is significantly more transparent, semantically clear, and capable of being implemented using the standard algorithms for solving DIDs. We show how I-DIDs may be used to model an agent's uncertainty over others' models, which may themselves be I-DIDs. The solution to the I-DID is a policy that prescribes what the agent should do over time, given its beliefs over the physical state and others' models. Analogous to DIDs, I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents.

2. BACKGROUND: FINITELY NESTED I-POMDPS

Interactive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others. For simplicity of presentation we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple:

I-POMDP_{i,l} = ⟨IS_{i,l}, A, T_i, Ω_i, O_i, R_i⟩

where:
• IS_{i,l} denotes a set of interactive states defined as IS_{i,l} = S × M_{j,l−1}, where M_{j,l−1} = {Θ_{j,l−1} ∪ SM_j}, for l ≥ 1, and IS_{i,0} = S, where S is the set of states of the physical environment. Θ_{j,l−1} is the set of computable intentional models of agent j: θ_{j,l−1} = ⟨b_{j,l−1}, ˆθ_j⟩, where the frame ˆθ_j = ⟨A, Ω_j, T_j, O_j, R_j, OC_j⟩. Here, j is Bayes rational and OC_j is j's optimality criterion. SM_j is the set of subintentional models of j. Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below.
$$\begin{array}{ll}
IS_{i,0} = S, & \Theta_{j,0} = \{\langle b_{j,0}, \hat{\theta}_j \rangle : b_{j,0} \in \Delta(IS_{j,0})\}\\
IS_{i,1} = S \times \{\Theta_{j,0} \cup SM_j\}, & \Theta_{j,1} = \{\langle b_{j,1}, \hat{\theta}_j \rangle : b_{j,1} \in \Delta(IS_{j,1})\}\\
\quad\vdots & \quad\vdots\\
IS_{i,l} = S \times \{\Theta_{j,l-1} \cup SM_j\}, & \Theta_{j,l} = \{\langle b_{j,l}, \hat{\theta}_j \rangle : b_{j,l} \in \Delta(IS_{j,l})\}
\end{array}$$
Similar formulations of nested spaces have appeared in [1, 3].
• $A = A_i \times A_j$ is the set of joint actions of all agents in the environment;
• $T_i: S \times A \times S \rightarrow [0, 1]$ describes the effect of the joint actions on the physical states of the environment;
• $\Omega_i$ is the set of observations of agent i;
• $O_i: S \times A \times \Omega_i \rightarrow [0, 1]$ gives the likelihood of the observations given the physical state and joint action;
• $R_i: IS_i \times A \rightarrow \mathbb{R}$ describes agent i's preferences over its interactive states. Usually only the physical states will matter.
Agent i's policy is the mapping $\Omega_i^* \rightarrow \Delta(A_i)$, where $\Omega_i^*$ is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, $\Delta(IS_i) \rightarrow \Delta(A_i)$.

2.1 Belief Update
Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes. However, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's models have to be included in i's belief update. Specifically, if j is intentional, then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model is subintentional, then j's probable observations are appended to the observation history contained in the model. Formally, we have:
$$Pr(is^t | a_i^{t-1}, b_{i,l}^{t-1}) = \beta \sum_{is^{t-1}:\, \hat{m}_j^{t-1} = \hat{\theta}_j^t} b_{i,l}^{t-1}(is^{t-1}) \sum_{a_j^{t-1}} Pr(a_j^{t-1} | \theta_{j,l-1}^{t-1})\, O_i(s^t, a_i^{t-1}, a_j^{t-1}, o_i^t)\, T_i(s^{t-1}, a_i^{t-1}, a_j^{t-1}, s^t) \sum_{o_j^t} O_j(s^t, a_i^{t-1}, a_j^{t-1}, o_j^t)\, \tau\big(SE_{\hat{\theta}_j^t}(b_{j,l-1}^{t-1}, a_j^{t-1}, o_j^t) - b_{j,l-1}^t\big) \quad (1)$$
where $\beta$ is the normalizing constant, $\tau$ is 1 if its argument is 0 and otherwise it is 0, $Pr(a_j^{t-1} | \theta_{j,l-1}^{t-1})$ is the probability that $a_j^{t-1}$ is Bayes rational for the agent described by model $\theta_{j,l-1}^{t-1}$, and $SE(\cdot)$ is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9]. If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term $SE_{\hat{\theta}_j^t}(b_{j,l-1}^{t-1}, a_j^{t-1}, o_j^t)$), which in turn could invoke i's belief update, and so on. This recursion in belief nesting bottoms out at the 0th level, where the belief update of the agent reduces to a POMDP belief update.¹ For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].
¹ The 0th level model is a POMDP: the other agent's actions are treated as exogenous events and folded into the T, O, and R functions.

2.2 Value Iteration
Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state:
$$U^n(\langle b_{i,l}, \hat{\theta}_i \rangle) = \max_{a_i \in A_i} \Big\{ \sum_{is \in IS_{i,l}} ER_i(is, a_i)\, b_{i,l}(is) + \gamma \sum_{o_i \in \Omega_i} Pr(o_i | a_i, b_{i,l})\, U^{n-1}(\langle SE_{\hat{\theta}_i}(b_{i,l}, a_i, o_i), \hat{\theta}_i \rangle) \Big\} \quad (2)$$
where $ER_i(is, a_i) = \sum_{a_j} R_i(is, a_i, a_j) Pr(a_j | m_{j,l-1})$ (since $is = (s, m_{j,l-1})$). Eq. 2 is a basis for value iteration in I-POMDPs.
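Since the recursion in the belief update bottoms out in an ordinary POMDP update at level 0, the base case is straightforward. The following is a minimal sketch of that level-0 update, not the framework's implementation; the array layout, the example numbers, and the function name are our own illustrative assumptions.

```python
import numpy as np

def pomdp_belief_update(b, a, o, T, O):
    """Level-0 (base case) update: b'(s') ∝ O[s', a, o] * Σ_s T[s, a, s'] b(s).

    b: belief over physical states, shape (|S|,)
    a: action index; o: observation index
    T: transition probabilities, shape (|S|, |A|, |S|)
    O: observation probabilities, shape (|S|, |A|, |Ω|)
    """
    predicted = b @ T[:, a, :]                 # prediction step over next states s'
    unnormalized = O[:, a, o] * predicted      # correction by observation likelihood
    return unnormalized / unnormalized.sum()   # normalization (the β of Eq. 1)

# Tiny two-state usage example with made-up numbers
T = np.array([[[0.9, 0.1], [0.5, 0.5]],
              [[0.1, 0.9], [0.5, 0.5]]])
O = np.array([[[0.85, 0.15], [0.5, 0.5]],
              [[0.15, 0.85], [0.5, 0.5]]])
print(pomdp_belief_update(np.array([0.5, 0.5]), a=0, o=0, T=T, O=O))  # [0.85 0.15]
```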
Agent i's optimal action, $a_i^*$, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, $OPT(\theta_i)$, defined as:
$$OPT(\langle b_{i,l}, \hat{\theta}_i \rangle) = \mathop{\mathrm{argmax}}_{a_i \in A_i} \Big\{ \sum_{is \in IS_{i,l}} ER_i(is, a_i)\, b_{i,l}(is) + \gamma \sum_{o_i \in \Omega_i} Pr(o_i | a_i, b_{i,l})\, U^n(\langle SE_{\hat{\theta}_i}(b_{i,l}, a_i, o_i), \hat{\theta}_i \rangle) \Big\} \quad (3)$$

3. INTERACTIVE INFLUENCE DIAGRAMS
A naive extension of influence diagrams (IDs) to settings populated by multiple agents is possible by treating other agents as automatons, represented using chance nodes. However, this approach assumes that the agents' actions are controlled using a probability distribution that does not change over time. Interactive influence diagrams (I-IDs) adopt a more sophisticated approach by generalizing IDs to make them applicable to settings shared with other agents who may act and observe, and update their beliefs.

3.1 Syntax
In addition to the usual chance, decision, and utility nodes, I-IDs include a new type of node called the model node. We show a general level l I-ID in Fig. 1(a), where the model node ($M_{j,l-1}$) is denoted using a hexagon. We note that the probability distribution over the chance node, S, and the model node together represents agent i's belief over its interactive states.

Figure 1: (a) A generic level l I-ID for agent i situated with one other agent j. The hexagon is the model node ($M_{j,l-1}$) whose structure we show in (b). Members of the model node are I-IDs themselves ($m^1_{j,l-1}$, $m^2_{j,l-1}$; diagrams not shown here for simplicity) whose decision nodes are mapped to the corresponding chance nodes ($A^1_j$, $A^2_j$). Depending on the value of the node, Mod[$M_j$], the distribution of each of the chance nodes is assigned to the node $A_j$. (c) The transformed I-ID with the model node replaced by the chance nodes and the relationships between them.

In addition to the model node, I-IDs differ from IDs by having a dashed link (called the "policy link" in [15]) between the model node and a chance node, $A_j$, that represents the distribution over the other agent's actions given its model. In the absence of other agents, the model node and the chance node, $A_j$, vanish and I-IDs collapse into traditional IDs. The model node contains the alternative computational models ascribed by i to the other agent from the set $\Theta_{j,l-1} \cup SM_j$, where $\Theta_{j,l-1}$ and $SM_j$ were defined previously in Section 2. Thus, a model in the model node may itself be an I-ID or ID, and the recursion terminates when a model is an ID or subintentional. Because the model node contains the alternative models of the other agent as its values, its representation is not trivial. In particular, some of the models within the node are I-IDs that, when solved, generate the agent's optimal policy in their decision nodes. Each decision node is mapped to the corresponding chance node, say $A^1_j$, in the following way: if OPT is the set of optimal actions obtained by solving the I-ID (or ID), then $Pr(a_j \in A^1_j) = \frac{1}{|OPT|}$ if $a_j \in OPT$, and 0 otherwise. Borrowing insights from previous work [8], we observe that the model node and the dashed "policy link" that connects it to the chance node, $A_j$, could be represented as shown in Fig. 1(b).
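As a small illustration of the decision-node-to-chance-node mapping just defined, the sketch below turns a solved model's set of optimal actions, OPT, into the distribution held by the corresponding chance node (e.g., $A^1_j$); the action names are hypothetical placeholders.

```python
def decision_to_chance(actions, OPT):
    """Pr(aj) = 1/|OPT| if aj is in OPT, and 0 otherwise."""
    p = 1.0 / len(OPT)
    return {aj: (p if aj in OPT else 0.0) for aj in actions}

# e.g., solving a lower-level model yields the single optimal action 'L'
print(decision_to_chance(['OR', 'OL', 'L'], OPT={'L'}))
# {'OR': 0.0, 'OL': 0.0, 'L': 1.0}
```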
The decision node of each level l − 1 I-ID is transformed into a chance node, as we mentioned previously, so that the actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. The different chance nodes ($A^1_j$, $A^2_j$), one for each model, and additionally the chance node labeled Mod[$M_j$], form the parents of the chance node, $A_j$. Thus, there are as many action nodes ($A^1_j$, $A^2_j$) in $M_{j,l-1}$ as the number of models in the support of agent i's beliefs. The conditional probability table of the chance node, $A_j$, is a multiplexer that assumes the distribution of each of the action nodes ($A^1_j$, $A^2_j$) depending on the value of Mod[$M_j$]. The values of Mod[$M_j$] denote the different models of j. In other words, when Mod[$M_j$] has the value $m^1_{j,l-1}$, the chance node $A_j$ assumes the distribution of the node $A^1_j$, and $A_j$ assumes the distribution of $A^2_j$ when Mod[$M_j$] has the value $m^2_{j,l-1}$. The distribution over the node, Mod[$M_j$], is agent i's belief over the models of j given a physical state. For more agents, we will have as many model nodes as there are agents. Notice that Fig. 1(b) clarifies the semantics of the "policy link", and shows how it can be represented using the traditional dependency links. In Fig. 1(c), we show the transformed I-ID when the model node is replaced by the chance nodes and relationships between them. In contrast to the representation in [15], there are no special-purpose "policy links"; rather, the I-ID is composed of only those types of nodes that are found in traditional IDs and dependency relationships between the nodes. This allows I-IDs to be represented and implemented using conventional application tools that target IDs. Note that we may view the level l I-ID as a NID. Specifically, each of the level l − 1 models within the model node is a block in the NID (see Fig. 2). If the level l = 1, each block is a traditional ID; otherwise, if l > 1, each block within the NID may itself be a NID. Note that within the I-IDs (or IDs) at each level, there is only a single decision node. Thus, our NID does not contain any MAIDs.

Figure 2: A level l I-ID represented as a NID. The probabilities assigned to the blocks of the NID are i's beliefs over j's models conditioned on a physical state.

3.2 Solution
The solution of an I-ID proceeds in a bottom-up manner, and is implemented recursively. We start by solving the level 0 models, which, if intentional, are traditional IDs. Their solutions provide probability distributions over the other agents' actions, which are entered in the corresponding chance nodes found in the model node of the level 1 I-ID. The mapping from the level 0 models' decision nodes to the chance nodes is carried out so that actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. Given the distributions over the actions within the different chance nodes (one for each model of the other agent), the level 1 I-ID is transformed as shown in Fig. 1(c). During the transformation, the conditional probability table (CPT) of the node, $A_j$, is populated such that the node assumes the distribution of each of the chance nodes depending on the value of the node, Mod[$M_j$]. As we mentioned previously, the values of the node Mod[$M_j$] denote the different models of the other agent, and its distribution is agent i's belief over the models of j conditioned on the physical state.
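The multiplexer CPT of $A_j$ described above can be assembled mechanically once the lower-level models are solved. Below is a minimal sketch assuming each model's action distribution is supplied as a row of a matrix; the names and numbers are illustrative only.

```python
import numpy as np

def multiplexer_cpt(action_dists):
    """P(Aj | Mod[Mj]): given Mod[Mj] = k, Aj assumes the distribution
    of the k-th action node, so the CPT is simply the stacked rows."""
    cpt = np.asarray(action_dists, dtype=float)
    assert np.allclose(cpt.sum(axis=1), 1.0), "each row must be a distribution"
    return cpt

# Two models of j: the first prescribes action 0; the second mixes actions 1 and 2
cpt = multiplexer_cpt([[1.0, 0.0, 0.0],
                       [0.0, 0.5, 0.5]])
# Marginal of Aj under i's belief over j's models, e.g., Pr(Mod[Mj]) = [0.7, 0.3]
print(np.array([0.7, 0.3]) @ cpt)  # [0.7  0.15 0.15]
```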
The transformed level 1 I-ID is a traditional ID that may be solved using the standard expected utility maximization method [18]. This procedure is carried out up to the level l I-ID, whose solution gives the non-empty set of optimal actions that the agent should perform given its belief. Notice that, analogous to IDs, I-IDs are suitable for online decision-making when the agent's current belief is known.

4. INTERACTIVE DYNAMIC INFLUENCE DIAGRAMS
Interactive dynamic influence diagrams (I-DIDs) extend I-IDs (and NIDs) to allow sequential decision-making over several time steps. Just as DIDs are structured graphical representations of POMDPs, I-DIDs are the graphical online analogs for finitely nested I-POMDPs. I-DIDs may be used to optimize over a finite look-ahead given initial beliefs while interacting with other, possibly similar, agents.

4.1 Syntax
We depict a general two time-slice I-DID in Fig. 3(a). In addition to the model nodes and the dashed policy link, what differentiates an I-DID from a DID is the model update link shown as a dotted arrow in Fig. 3(a). We explained the semantics of the model node and the policy link in the previous section; we describe the model updates next.

Figure 3: (a) A generic two time-slice level l I-DID for agent i in a setting with one other agent j. Notice the dotted model update link that denotes the update of the models of j and the distribution over the models over time. (b) The semantics of the model update link.

The update of the model node over time involves two steps. First, given the models at time t, we identify the updated set of models that reside in the model node at time t + 1. Recall from Section 2 that an agent's intentional model includes its belief. Because the agents act and receive observations, their models are updated to reflect their changed beliefs. Since the set of optimal actions for a model could include all the actions, and the agent may receive any one of $|\Omega_j|$ possible observations, the updated set at time step t + 1 will have at most $|M^t_{j,l-1}||A_j||\Omega_j|$ models. Here, $|M^t_{j,l-1}|$ is the number of models at time step t, and $|A_j|$ and $|\Omega_j|$ are the largest spaces of actions and observations, respectively, among all the models. Second, we compute the new distribution over the updated models given the original distribution and the probability of the agent performing the action and receiving the observation that led to the updated model. These steps are a part of agent i's belief update formalized using Eq. 1. In Fig. 3(b), we show how the dotted model update link is implemented in the I-DID. If each of the two level l − 1 models ascribed to j at time step t results in one action, and j could make one of two possible observations, then the model node at time step t + 1 contains four updated models ($m^{t+1,1}_{j,l-1}$, $m^{t+1,2}_{j,l-1}$, $m^{t+1,3}_{j,l-1}$, and $m^{t+1,4}_{j,l-1}$). These models differ in their initial beliefs, each of which is the result of j updating its beliefs due to its action and a possible observation. The decision nodes in each of the I-DIDs or DIDs that represent the lower level models are mapped to the corresponding chance nodes, as mentioned previously. Next, we describe how the distribution over the updated set of models (the distribution over the chance node Mod[$M^{t+1}_j$] in $M^{t+1}_{j,l-1}$) is computed.
The probability that j's updated model is, say $m^{t+1,1}_{j,l-1}$, depends on the probability of j performing the action and receiving the observation that led to this model, and the prior distribution over the models at time step t. Because the chance node $A^t_j$ assumes the distribution of each of the action nodes based on the value of Mod[$M^t_j$], the probability of the action is given by this chance node. In order to obtain the probability of j's possible observation, we introduce the chance node $O_j$, which, depending on the value of Mod[$M^t_j$], assumes the distribution of the observation node in the lower level model denoted by Mod[$M^t_j$]. Because the probability of j's observations depends on the physical state and the joint actions of both agents, the node $O_j$ is linked with $S^{t+1}$, $A^t_j$, and $A^t_i$.² Analogous to $A^t_j$, the conditional probability table of $O_j$ is also a multiplexer modulated by Mod[$M^t_j$]. Finally, the distribution over the prior models at time t is obtained from the chance node Mod[$M^t_j$] in $M^t_{j,l-1}$. Consequently, the chance nodes Mod[$M^t_j$], $A^t_j$, and $O_j$ form the parents of Mod[$M^{t+1}_j$] in $M^{t+1}_{j,l-1}$. Notice that the model update link may be replaced by the dependency links between the chance nodes that constitute the model nodes in the two time slices.
² Note that $O_j$ represents j's observation at time t + 1.

Figure 4: Transformed I-DID with the model nodes and model update link replaced with the chance nodes and the relationships (in bold).

In Fig. 4 we show the two time-slice I-DID with the model nodes replaced by the chance nodes and the relationships between them. Chance nodes and dependency links that are not in bold are standard, usually found in DIDs. Expansion of the I-DID over more time steps requires the repetition of the two steps of updating the set of models that form the values of the model node and adding the relationships between the chance nodes, as many times as there are model update links. We note that the possible set of models of the other agent j grows exponentially with the number of time steps. For example, after T steps, there may be at most $|M^{t=1}_{j,l-1}|(|A_j||\Omega_j|)^{T-1}$ candidate models residing in the model node.
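The distribution over Mod[$M^{t+1}_j$] can be computed directly from its three parents. The sketch below is a hypothetical simplification that folds the dependence of $O_j$ on the physical state and on i's action into per-model observation probabilities; in the I-DID itself, those dependencies are carried by the links from $S^{t+1}$ and $A^t_i$.

```python
from collections import defaultdict

def update_model_distribution(prior, act, obs, child):
    """Distribution over Mod[M^{t+1}_j] from Mod[M^t_j], A^t_j, and O_j.

    prior[k]     : Pr(Mod[M^t_j] = k)
    act[k][a]    : Pr(A^t_j = a | Mod[M^t_j] = k), from the solved model k
    obs[k][a][o] : Pr(O_j = o | Mod[M^t_j] = k, a), state dependence abstracted
    child(k,a,o) : identifier of the updated model SE(b_k, a, o)
    """
    posterior = defaultdict(float)
    for k, pk in prior.items():
        for a, pa in act[k].items():
            for o, po in obs[k][a].items():
                posterior[child(k, a, o)] += pk * pa * po
    return dict(posterior)

# Two prior models with one optimal action each and two observations give at
# most |M^t||A_j||Ω_j| = 2 * 1 * 2 = 4 updated models, matching the bound above.
prior = {0: 0.6, 1: 0.4}
act = {0: {'L': 1.0}, 1: {'L': 1.0}}
obs = {0: {'L': {'GL': 0.95, 'GR': 0.05}}, 1: {'L': {'GL': 0.3, 'GR': 0.7}}}
print(update_model_distribution(prior, act, obs, child=lambda k, a, o: (k, a, o)))
```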
4.2 Solution
Analogous to I-IDs, the solution to a level l I-DID for agent i expanded over T time steps may be carried out recursively. For the purpose of illustration, let l=1 and T=2. The solution method uses the standard look-ahead technique, projecting the agent's action and observation sequences forward from the current belief state [17], and finding the possible beliefs that i could have in the next time step. Because agent i has a belief over j's models as well, the look-ahead includes finding out the possible models that j could have in the future. Consequently, each of j's subintentional or level 0 models (represented using a standard DID) in the first time step must be solved to obtain its optimal set of actions. These actions are combined with the set of possible observations that j could make in that model, resulting in an updated set of candidate models (that include the updated beliefs) that could describe the behavior of j. Beliefs over this updated set of candidate models are calculated using the standard inference methods using the dependency relationships between the model nodes as shown in Fig. 3(b). We note the recursive nature of this solution: in solving agent i's level 1 I-DID, j's level 0 DIDs must be solved. If the nesting of models is deeper, all models at all levels starting from 0 are solved in a bottom-up manner.
We briefly outline the recursive algorithm for solving agent i's level l I-DID expanded over T time steps with one other agent j in Fig. 5.

Figure 5: Algorithm for solving a level l ≥ 0 I-DID.
Input: level l ≥ 1 I-ID or level 0 ID, horizon T
Expansion Phase
1. For t from 1 to T − 1 do
2.   If l ≥ 1 then populate $M^{t+1}_{j,l-1}$:
3.     For each $m^t_j$ in Range($M^t_{j,l-1}$) do
4.       Recursively call the algorithm with the l − 1 I-ID (or ID) that represents $m^t_j$ and the horizon T − t + 1
5.       Map the decision node of the solved I-ID (or ID), OPT($m^t_j$), to a chance node $A_j$
6.       For each $a_j$ in OPT($m^t_j$) do
7.         For each $o_j$ in $\Omega_j$ (part of $m^t_j$) do
8.           Update j's belief, $b^{t+1}_j \leftarrow SE(b^t_j, a_j, o_j)$
9.           $m^{t+1}_j \leftarrow$ new I-ID (or ID) with $b^{t+1}_j$ as the initial belief
10.          Range($M^{t+1}_{j,l-1}$) $\leftarrow$ Range($M^{t+1}_{j,l-1}$) $\cup$ {$m^{t+1}_j$}
11.    Add the model node, $M^{t+1}_{j,l-1}$, and the dependency links between $M^t_{j,l-1}$ and $M^{t+1}_{j,l-1}$ (shown in Fig. 3(b))
12.  Add the chance, decision, and utility nodes for the t + 1 time slice and the dependency links between them
13.  Establish the CPTs for each chance node and utility node
Look-Ahead Phase
14. Apply the standard look-ahead and backup method to solve the expanded I-DID

We adopt a two-phase approach: Given an I-ID of level l (described previously in Section 3) with all lower level models also represented as I-IDs or IDs (if level 0), the first step is to expand the level l I-ID over T time steps, adding the dependency links and the conditional probability tables for each node. We particularly focus on establishing and populating the model nodes (lines 3-11). Note that Range($\cdot$) returns the values (lower level models) of the random variable given as input (the model node). In the second phase, we use a standard look-ahead technique, projecting the action and observation sequences over T time steps into the future, and backing up the utility values of the reachable beliefs. Similar to I-IDs, I-DIDs reduce to DIDs in the absence of other agents. As we mentioned previously, the 0th level models are the traditional DIDs. Their solutions provide probability distributions over actions of the agent modeled at that level to I-DIDs at level 1. Given probability distributions over the other agent's actions, the level 1 I-DIDs can themselves be solved as DIDs, and provide probability distributions to yet higher level models. Assume that the number of models considered at each level is bound by a number, M. Solving an I-DID of level l is then equivalent to solving $O(M^l)$ DIDs.

5. EXAMPLE APPLICATIONS
To illustrate the usefulness of I-DIDs, we apply them to three problem domains. We describe, in particular, the formulation of the I-DID and the optimal prescriptions obtained on solving it.

5.1 Followership-Leadership in the Multiagent Tiger Problem
We begin our illustrations of using I-IDs and I-DIDs with a slightly modified version of the multiagent tiger problem discussed in [9]. The problem has two agents, each of which can open the right door (OR), the left door (OL), or listen (L). In addition to hearing growls (from the left (GL) or from the right (GR)) when they listen, the agents also hear creaks (from the left (CL), from the right (CR), or no creaks (S)), which noisily indicate the other agent's opening one of the doors. When any door is opened, the tiger persists in its original location with a probability of 95%. Agent i hears growls with a reliability of 65% and creaks with a reliability of 95%. Agent j, on the other hand, hears growls with a reliability of 95%.
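To make the stated reliabilities concrete, here is a minimal sketch of agent i's observation model, under the common assumption for this problem that the growl and creak components are independent given the tiger's location and j's action; the factorization and names are ours, not a prescribed implementation.

```python
def growl_prob(growl, tiger_loc, reliability=0.65):
    """Pr(growl | tiger location) when agent i listens."""
    correct = (growl == 'GL') == (tiger_loc == 'left')
    return reliability if correct else 1.0 - reliability

def creak_prob(creak, j_action, reliability=0.95):
    """Pr(creak | j's action); creaks noisily signal j opening a door."""
    truth = {'OL': 'CL', 'OR': 'CR', 'L': 'S'}[j_action]
    return reliability if creak == truth else (1.0 - reliability) / 2.0

def obs_prob(growl, creak, tiger_loc, j_action):
    # Joint observation probability, assuming the two signals are independent
    return growl_prob(growl, tiger_loc) * creak_prob(creak, j_action)

# Tiger on the left and j listened: (GL, S) is the most likely observation
print(obs_prob('GL', 'S', tiger_loc='left', j_action='L'))  # 0.65 * 0.95 = 0.6175
```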
Thus, the setting is such that agent i hears agent j opening doors more reliably than the tiger's growls. This suggests that i could use j's actions as an indication of the location of the tiger, as we discuss below. Each agent's preferences are as in the single agent game discussed in [13]. The transition, observation, and reward functions are shown in [16].
A good indicator of the usefulness of normative methods for decision-making like I-DIDs is the emergence of realistic social behaviors in their prescriptions. In settings of the persistent multiagent tiger problem that reflect real world situations, we demonstrate followership between the agents and, as shown in [15], deception among agents who believe that they are in a follower-leader type of relationship. In particular, we analyze the situational and epistemological conditions sufficient for their emergence. The followership behavior, for example, results from the agent knowing its own weaknesses, assessing the strengths, preferences, and possible behaviors of the other, and realizing that it is best for it to follow the other's actions in order to maximize its payoffs.
Let us consider a particular setting of the tiger problem in which agent i believes that j's preferences are aligned with its own - both of them just want to get the gold - and that j's hearing is more reliable in comparison to its own. As an example, suppose that j, on listening, can discern the tiger's location 95% of the time compared to i's 65% accuracy. Additionally, agent i does not have any initial information about the tiger's location. In other words, i's single-level nested belief, $b_{i,1}$, assigns 0.5 to each of the two locations of the tiger. In addition, i considers two models of j, which differ in j's flat level 0 initial beliefs. This is represented in the level 1 I-ID shown in Fig. 6(a). According to one model, j assigns a probability of 0.9 that the tiger is behind the left door, while the other model assigns 0.1 to that location (see Fig. 6(b)).

Figure 6: (a) Level 1 I-ID of agent i, (b) two level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, $A^1_j$, $A^2_j$, in (a).

Agent i is undecided between these two models of j. If we vary i's hearing ability and solve the corresponding level 1 I-ID expanded over three time steps, we obtain the normative behavioral policies shown in Fig. 7 that exhibit followership behavior. If i's probability of correctly hearing the growls is 0.65, then, as shown in the policy in Fig. 7(a), i begins to conditionally follow j's actions: i opens the same door that j opened previously iff i's own assessment of the tiger's location confirms j's pick. If i loses the ability to correctly interpret the growls completely, it blindly follows j and opens the same door that j opened previously (Fig. 7(b)).

Figure 7: Emergence of (a) conditional followership, and (b) blind followership in the tiger problem. Behaviors of interest are in bold. * is a wildcard, and denotes any one of the observations.

We observed that a single level of belief nesting - beliefs about the other's models - was sufficient for followership to emerge in the tiger problem. However, the epistemological requirements for the emergence of leadership are more complex. For an agent, say j, to emerge as a leader, followership must first emerge in the other agent i.
As we mentioned previously, if i is certain that its preferences are identical to those of j, and believes that j has a better sense of hearing, i will follow j's actions over time. Agent j emerges as a leader if it believes that i will follow it, which implies that j's belief must be nested two levels deep to enable it to recognize its leadership role. Realizing that i will follow presents j with an opportunity to influence i's actions for the benefit of the collective good or its self-interest alone. For example, in the tiger problem, let us consider a setting in which, if both i and j open the correct door, then each gets a payoff of 20, double the original. If j alone selects the correct door, it gets the payoff of 10. On the other hand, if both agents pick the wrong door, their penalties are cut in half. In this setting, it is in both j's best interest as well as the collective betterment for j to use its expertise in selecting the correct door, and thus be a good leader. However, consider a slightly different problem in which j gains from i's loss and is penalized if i gains. Specifically, let i's payoff be subtracted from j's, indicating that j is antagonistic toward i - if j picks the correct door and i the wrong one, then i's loss of 100 becomes j's gain. Agent j believes that i incorrectly thinks that j's preferences are those that promote the collective good, and that it starts off by believing with 99% confidence where the tiger is. Because i believes that its preferences are similar to those of j, and that j starts by believing almost surely that one of the two is the correct location (two level 0 models of j), i will start by following j's actions. We show i's normative policy on solving its singly-nested I-DID over three time steps in Fig. 8(a). The policy demonstrates that i will blindly follow j's actions. Since the tiger persists in its original location with a probability of 0.95, i will select the same door again. If j begins the game with a 99% probability that the tiger is on the right, solving j's I-DID nested two levels deep results in the policy shown in Fig. 8(b). Even though j is almost certain that OL is the correct action, it will start by selecting OR, followed by OL. Agent j's intention is to deceive i who, it believes, will follow j's actions, so as to gain $110 in the second time step, which is more than what j would gain if it were to be honest.

Figure 8: Emergence of deception between agents in the tiger problem. Behaviors of interest are in bold. * denotes as before. (a) Agent i's policy demonstrating that it will blindly follow j's actions. (b) Even though j is almost certain that the tiger is on the right, it will start by selecting OR, followed by OL, in order to deceive i.

5.2 Altruism and Reciprocity in the Public Good Problem
The public good (PG) problem [7] consists of a group of M agents, each of whom must either contribute some resource to a public pot or keep it for themselves. Since resources contributed to the public pot are shared among all the agents, they are less valuable to the agent when in the public pot. However, if all agents choose to contribute their resources, then the payoff to each agent is more than if no one contributes. Since an agent gets its share of the public pot irrespective of whether it has contributed or not, the dominating action is for each agent to not contribute, and instead free ride on others' contributions.
However, behaviors of human players in empirical simulations of the PG problem differ from the normative predictions. The experiments reveal that many players initially contribute a large amount to the public pot, and continue to contribute when the PG problem is played repeatedly, though in decreasing amounts [4]. Many of these experiments [5] report that a small core group of players persistently contributes to the public pot even when all others are defecting. These experiments also reveal that players who persistently contribute have altruistic or reciprocal preferences matching expected cooperation of others.
For simplicity, we assume that the game is played between M = 2 agents, i and j. Let each agent be initially endowed with $X_T$ amount of resources. While the classical PG game formulation permits each agent to contribute any quantity of resources ($\leq X_T$) to the public pot, we simplify the action space by allowing two possible actions. Each agent may choose to either contribute (C) a fixed amount of the resources, or not contribute. The latter action is denoted as defect (D). We assume that the actions are not observable to others. The value of resources in the public pot is discounted by $c_i$ for each agent i, where $c_i$ is the marginal private return. We assume that $c_i < 1$, so that the agent does not benefit enough to contribute to the public pot for private gain alone. Simultaneously, $c_i M > 1$, making collective contribution Pareto optimal.
In order to encourage contributions, the contributing agents punish free riders, but incur a small cost for administering the punishment. Let P be the punishment meted out to the defecting agent and $c_p$ the non-zero cost of punishing for the contributing agent. For simplicity, we assume that the cost of punishing is the same for both agents. The one-shot PG game with punishment is shown in Table 1. Let $c_i = c_j$ and $c_p > 0$. If $P > X_T - c_i X_T$, then defection is no longer a dominating action. If $P < X_T - c_i X_T$, then defection is the dominating action for both. If $P = X_T - c_i X_T$, then the game is not dominance-solvable.

Table 1: The one-shot PG game with punishment.
  i \ j |                 C                  |                 D
    C   | $2c_iX_T$, $2c_jX_T$               | $c_iX_T - c_p$, $X_T + c_jX_T - P$
    D   | $X_T + c_iX_T - P$, $c_jX_T - c_p$ | $X_T$, $X_T$

Figure 9: (a) Level 1 I-ID of agent i, (b) level 0 IDs of agent j with decision nodes mapped to the chance nodes, $A^1_j$ and $A^2_j$, in (a).

We formulate a sequential version of the PG problem with punishment from the perspective of agent i. Though in the repeated PG game the quantity in the public pot is revealed to all the agents after each round of actions, we assume in our formulation that it is hidden from the agents. Each agent may contribute a fixed amount, $x_c$, or defect. An agent, on performing an action, receives an observation of plenty (PY) or meager (MR), symbolizing the state of the public pot. Notice that the observations are also indirectly indicative of agent j's actions, because the state of the public pot is influenced by them. The amount of resources in agent i's private pot is perfectly observable to i. The payoffs are analogous to Table 1. Borrowing from the empirical investigations of the PG problem [5], we construct level 0 IDs for j that model altruistic and non-altruistic types (Fig. 9(b)). Specifically, our altruistic agent has a high marginal private return ($c_j$ is close to 1) and does not punish others who defect. Let $x_c = 1$, and let the level 0 agent be punished half the times it defects.
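The dominance conditions for the one-shot game in Table 1 are easy to check numerically before turning to the sequential analysis below. The following sketch rebuilds the payoff table for given parameters and reports which punishment regime applies; the parameter values in the example are illustrative.

```python
def pg_payoffs(XT, ci, cj, P, cp):
    """Entries (payoff_i, payoff_j) of Table 1; i is the row player."""
    return {
        ('C', 'C'): (2 * ci * XT,          2 * cj * XT),
        ('C', 'D'): (ci * XT - cp,         XT + cj * XT - P),
        ('D', 'C'): (XT + ci * XT - P,     cj * XT - cp),
        ('D', 'D'): (XT,                   XT),
    }

def punishment_regime(XT, ci, P):
    """Defection dominates iff P < XT - ci*XT; equality is not dominance-solvable."""
    threshold = XT - ci * XT
    if P < threshold:
        return "defection dominates"
    if P > threshold:
        return "defection no longer dominates"
    return "not dominance-solvable"

# e.g., XT = 10 and ci = cj = 0.6, so the threshold XT - ci*XT is 4
print(pg_payoffs(10, 0.6, 0.6, P=4, cp=1))
for P in (2, 4, 6):
    print(P, punishment_regime(XT=10, ci=0.6, P=P))
```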
With one action remaining, both types of agents choose to contribute to avoid being punished. With two actions to go, the altruistic type chooses to contribute, while the other defects. This is because $c_j$ for the altruistic type is close to 1; thus the expected punishment, $0.5P > (1 - c_j)$, which the altruistic type avoids. Because $c_j$ for the non-altruistic type is smaller, it prefers not to contribute. With three steps to go, the altruistic agent contributes to avoid punishment ($0.5P > 2(1 - c_j)$), and the non-altruistic type defects. For more than three steps, while the altruistic agent continues to contribute to the public pot depending on how close its marginal private return is to 1, the non-altruistic type prescribes defection.
We analyzed the decisions of an altruistic agent i modeled using a level 1 I-DID expanded over 3 time steps. Agent i ascribes the two level 0 models, mentioned previously, to j (see Fig. 9). If i believes with a probability 1 that j is altruistic, i chooses to contribute for each of the three steps. This behavior persists when i is unaware of whether j is altruistic (Fig. 10(a)), and when i assigns a high probability to j being the non-altruistic type. However, when i believes with a probability 1 that j is non-altruistic and will thus surely defect, i chooses to defect to avoid being punished and because its marginal private return is less than 1. These results demonstrate that the behavior of our altruistic type resembles that found experimentally. The non-altruistic level 1 agent chooses to defect regardless of how likely it believes the other agent to be altruistic.
We also analyzed the behavior of a reciprocal agent type that matches expected cooperation or defection. The reciprocal type's marginal private return is similar to that of the non-altruistic type; however, it obtains a greater payoff when its action is similar to that of the other. We consider the case when the reciprocal agent i is unsure of whether j is altruistic and believes that the public pot is likely to be half full. For this prior belief, i chooses to defect. On receiving an observation of plenty, i decides to contribute, while an observation of meager makes it defect (Fig. 10(b)). This is because an observation of plenty signals that the pot is likely to be more than half full, which results from j's action to contribute. Thus, among the two models ascribed to j, its type is likely to be altruistic, making it likely that j will contribute again in the next time step. Agent i therefore chooses to contribute to reciprocate j's action. An analogous reasoning leads i to defect when it observes a meager pot. With one action to go, i, believing that j contributes, will choose to contribute too to avoid punishment, regardless of its observations.

Figure 10: (a) An altruistic level 1 agent always contributes. (b) A reciprocal agent i starts off by defecting, followed by choosing to contribute or defect based on its observation of plenty (indicating that j is likely altruistic) or meager (j is non-altruistic).

5.3 Strategies in Two-Player Poker
Poker is a popular zero-sum card game that has received much attention in the AI research community as a testbed [2]. Poker is played among M ≥ 2 players, each of whom receives a hand of cards from a deck.
While several flavors of Poker with varying complexity exist, we consider a simple version in which each player has three plies, during which the player may either exchange a card (E), keep the existing hand (K), fold (F) and withdraw from the game, or call (C), requiring all players to show their hands. To keep matters simple, let M = 2, and let each player receive a hand consisting of a single card drawn from the same suit. Thus, during a showdown, the player who has the numerically larger card (2 is the lowest, ace is the highest) wins the pot. During an exchange of cards, the discarded card is placed either in the L pile, indicating to the other agent that it was a low numbered card (less than 8), or in the H pile, indicating that the card had a rank greater than or equal to 8. Notice that, for example, if a lower numbered card is discarded, the probability of receiving a low card in exchange is now reduced.
We show the level 1 I-ID for the simplified two-player Poker in Fig. 11. We considered two models (personality types) of agent j. The conservative type believes that it is likely that its opponent has a high numbered card in its hand. On the other hand, the aggressive agent j believes with a high probability that its opponent has a lower numbered card. Thus, the two types differ in their beliefs over their opponent's hand. In both these level 0 models, the opponent is assumed to perform its actions following a fixed, uniform distribution. With three actions to go, regardless of its hand (unless it is an ace), the aggressive agent chooses to exchange its card, with the intent of improving on its current hand. This is because it believes the other to have a low card, which improves its chances of getting a high card during the exchange. The conservative agent chooses to keep its card, no matter its hand, because its chances of getting a high card are slim, as it believes that its opponent has one.

Figure 11: (a) Level 1 I-ID of agent i. The observation reveals information about j's hand of the previous time step. (b) Level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, $A^1_j$, $A^2_j$, in (a).

The policy of a level 1 agent i who believes that each card except its own has an equal likelihood of being in j's hand (neutral personality type), and that j could be either an aggressive or conservative type, is shown in Fig. 12. Agent i's own hand contains the card numbered 8. The agent starts by keeping its card. On seeing that j did not exchange a card (N), i believes with probability 1 that j is conservative and hence will keep its cards. Agent i responds by either keeping its card or exchanging it, because j is equally likely to have a lower or higher card. If i observes that j discarded its card into the L or H pile, i believes that j is aggressive. On observing L, i realizes that j had a low card, and is likely to have a high card after its exchange. Because the probability of receiving a low card is high now, i chooses to keep its card. On observing H, believing that the probability of receiving a high numbered card is high, i chooses to exchange its card. In the final step, i chooses to call regardless of its observation history, because its belief that j has a higher card is not sufficiently high to conclude that it is better to fold and relinquish the payoff.
This is partly because an observation of, say, L resets agent i's previous time step beliefs over j's hand to the low numbered cards only.

Figure 12: A level 1 agent i's three step policy in the Poker problem. Agent i starts by believing that j is equally likely to be aggressive or conservative and could have any card in its hand with equal probability.

6. DISCUSSION
We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves on the previous work significantly by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes, given its prior beliefs. We are currently investigating ways to solve I-DIDs approximately with provable bounds on the solution quality.

Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work. The first author would like to acknowledge the support of a UGARF grant.

7. REFERENCES
[1] R. J. Aumann. Interactive epistemology I: Knowledge. International Journal of Game Theory, 28:263-300, 1999.
[2] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron. The challenge of poker. AIJ, 2001.
[3] A. Brandenburger and E. Dekel. Hierarchies of beliefs and common knowledge. Journal of Economic Theory, 59:189-198, 1993.
[4] C. Camerer. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press, 2003.
[5] E. Fehr and S. Gachter. Cooperation and punishment in public goods experiments. American Economic Review, 90(4):980-994, 2000.
[6] D. Fudenberg and D. K. Levine. The Theory of Learning in Games. MIT Press, 1998.
[7] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[8] Y. Gal and A. Pfeffer. A language for modeling agents' decision-making processes in games. In AAMAS, 2003.
[9] P. Gmytrasiewicz and P. Doshi. A framework for sequential planning in multiagent settings. JAIR, 24:49-79, 2005.
[10] P. Gmytrasiewicz and E. Durfee. Rational coordination in multi-agent environments. JAAMAS, 3(4):319-350, 2000.
[11] J. C. Harsanyi. Games with incomplete information played by Bayesian players. Management Science, 14(3):159-182, 1967.
[12] R. A. Howard and J. E. Matheson. Influence diagrams. In R. A. Howard and J. E. Matheson, editors, The Principles and Applications of Decision Analysis. Strategic Decisions Group, Menlo Park, CA 94025, 1984.
[13] L. Kaelbling, M. Littman, and A. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence Journal, 2, 1998.
[14] D. Koller and B. Milch. Multi-agent influence diagrams for representing and solving games. In IJCAI, pages 1027-1034, 2001.
[15] K. Polich and P. Gmytrasiewicz. Interactive dynamic influence diagrams. In GTDT Workshop, AAMAS, 2006.
[16] B. Rathnas., P. Doshi, and P. J. Gmytrasiewicz. Exact solutions to interactive POMDPs using behavioral equivalence. In Autonomous Agents and Multi-Agent Systems Conference (AAMAS), 2006.
[17] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach (Second Edition). Prentice Hall, 2003.
[18] R. D. Shachter. Evaluating influence diagrams. Operations Research, 34(6):871-882, 1986.
[19] D. Suryadi and P. Gmytrasiewicz. Learning models of other agents using influence diagrams. In UM, 1999.
Graphical Models for Online Solutions to Interactive POMDPs ABSTRACT We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness. 1. INTRODUCTION Interactive partially observable Markov decision processes (IPOMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments. They generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment. The models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11]. I-POMDPs adopt a subjective approach to understanding strategic behavior, rooted in a decision-theoretic framework that takes a decision-maker's perspective in the interaction. In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagents settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14], and more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms using a graphical formalism to represent games of imperfect information with a decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game keeping in mind the different models of each agent. Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently. However, MAIDs provide an analysis of the game from an external viewpoint and the applicability of both is limited to static single play games. Matters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe. 
I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both, other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations. In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special purpose "policy link" introduced in the representation of I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the "model update link" that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of I-DID that is significantly more 978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS transparent, semantically clear, and capable of being implemented using the standard algorithms for solving DIDs. We show how IDIDs may be used to model an agent's uncertainty over others' models, that may themselves be I-DIDs. Solution to the I-DID is a policy that prescribes what the agent should do over time, given its beliefs over the physical state and others' models. Analogous to DIDs, I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. 2. BACKGROUND: FINITELY NESTED IPOMDPS Interactive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others. For simplicity of presentation we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple: where: • ISi, l denotes a set of interactive states defined as, ISi, l = S × Mj, l − 1, where Mj, l − 1 = {19j, l − 1 ∪ SMj}, for l ≥ 1, and ISi,0 = S, where S is the set of states of the physical environment. 19j, l − 1 is the set of computable intentional models of agent j: θj, l − 1 = ~ bj, l − 1, ˆθj ~ where the frame, ˆθj = ~ A, Ωj, Tj, Oj, Rj, OCj ~. Here, j is Bayes rational and OCj is j's optimality criterion. SMj is the set of subintentional models of j. Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below. Similar formulations of nested spaces have appeared in [1, 3]. 
• A = Ai × Aj is the set of joint actions of all agents in the environment; • Ti: S × A × S → [0, 1], describes the effect of the joint actions on the physical states of the environment; • Ωi is the set of observations of agent i; • Oi: S × A × Ωi → [0, 1] gives the likelihood of the observations given the physical state and joint action; • Ri: ISi × A → R describes agent i's preferences over its interactive states. Usually only the physical states will matter. Agent i's policy is the mapping, Ω ∗ i → Δ (Ai), where Ω ∗ i is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, Δ (ISi) → Δ (Ai). 2.1 Belief Update Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes. However, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's models have to be included in i's belief update. Specifically, if j is intentional then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model is subintentional, then j's probable observations are appended to the observation history contained in the model. Formally, we have: Pr (ist | at − 1 i, bt − 1 is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9]. If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SEθtj (bt − 1 j, otj)), which in turn could invoke i's belief update and so on. This recursion in belief nesting bottoms out at the 0th level. At this level, the belief update of the agent reduces to a POMDP belief update. 1 For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9]. 2.2 Value Iteration Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state: where, ERi (is, ai) = aj Ri (is, ai, aj) Pr (aj | mj, l − 1) (since is = (s, mj, l − 1)). Eq. 2 is a basis for value iteration in I-POMDPs. Agent i's optimal action, a ∗ i, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, OPT (θi), defined as: 3. INTERACTIVE INFLUENCE DIAGRAMS A naive extension of influence diagrams (IDs) to settings populated by multiple agents is possible by treating other agents as automatons, represented using chance nodes. However, this approach assumes that the agents' actions are controlled using a probability distribution that does not change over time. Interactive influence diagrams (I-IDs) adopt a more sophisticated approach by generalizing IDs to make them applicable to settings shared with other agents who may act and observe, and update their beliefs. 3.1 Syntax In addition to the usual chance, decision, and utility nodes, IIDs include a new type of node called the model node. We show a general level l I-ID in Fig. 
1 (a), where the model node (Mj, l − 1) is denoted using a hexagon. We note that the probability distribution over the chance node, S, and the model node together represents agent i's belief over its interactive states. In addition to the model 1The 0th level model is a POMDP: Other agent's actions are treated as exogenous events and folded into the T, O, and R functions. Figure 1: (a) A generic level l I-ID for agent i situated with one other agent j. The hexagon is the model node (Mj, l_1) whose structure we show in (b). Members of the model node are I-IDs themselves (m1j, l_1, m2j, l_1; diagrams not shown here for simplicity) whose decision nodes are mapped to the corresponding chance nodes (A1j, A2j). Depending on the value of the node, Mod [Mj], the distribution of each of the chance nodes is assigned to the node Aj. (c) The transformed I-ID with the model node replaced by the chance nodes and the relationships between them. node, I-IDs differ from IDs by having a dashed link (called the "policy link" in [15]) between the model node and a chance node, Aj, that represents the distribution over the other agent's actions given its model. In the absence of other agents, the model node and the chance node, Aj, vanish and I-IDs collapse into traditional IDs. The model node contains the alternative computational models ascribed by i to the other agent from the set, Θj, l_1 ∪ SMj, where Θj, l_1 and SMj were defined previously in Section 2. Thus, a model in the model node may itself be an I-ID or ID, and the recursion terminates when a model is an ID or subintentional. Because the model node contains the alternative models of the other agent as its values, its representation is not trivial. In particular, some of the models within the node are I-IDs that when solved generate the agent's optimal policy in their decision nodes. Each decision node is mapped to the corresponding chance node, say A1j, in the following way: if OPT is the set of optimal actions obtained by solving the I-ID (or ID), then Pr (aj ∈ A1j) = 1 1OP T 1 if aj ∈ OPT, 0 otherwise. Borrowing insights from previous work [8], we observe that the model node and the dashed "policy link" that connects it to the chance node, Aj, could be represented as shown in Fig. 1 (b). The decision node of each level l − 1 I-ID is transformed into a chance node, as we mentioned previously, so that the actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. The different chance nodes (A1j, A2j), one for each model, and additionally, the chance node labeled Mod [Mj] form the parents of the chance node, Aj. Thus, there are as many action nodes (A1j, A2j) in Mj, l_1 as the number of models in the support of agent i's beliefs. The conditional probability table of the chance node, Aj, is a multiplexer that assumes the distribution of each of the action nodes (A1j, A2j) depending on the value of Mod [Mj]. The values of Mod [Mj] denote the different models of j. In other words, when Mod [Mj] has the value _ 1j, l_1, the chance node Aj assumes the distribution of the node A1j, and Aj assumes the distribution of A2j when Mod [Mj] has the value _ 2j, l_1. The distribution over the node, Mod [Mj], is the agent i's belief over the models of j given a physical state. For more agents, we will have as many model nodes as there are agents. Notice that Fig. 
1 (b) clarifies the semantics of the "policy link", and shows how it can be represented using the traditional dependency links. In Fig. 1 (c), we show the transformed I-ID when the model node is replaced by the chance nodes and relationships between them. In contrast to the representation in [15], there are no special-purpose "policy links", rather the I-ID is composed of only those types of nodes that are found in traditional IDs and dependency relationships between the nodes. This allows I-IDs to be represented and implemented using conventional application tools that target IDs. Note that we may view the level l I-ID as a NID. Specifically, each of the level l − 1 models within the model node are blocks in the NID (see Fig. 2). If the level l = 1, each block is a traditional ID, otherwise if l> 1, each block within the NID may itself be a NID. Note that within the I-IDs (or IDs) at each level, there is only a single decision node. Thus, our NID does not contain any MAIDs. Figure 2: A level l I-ID represented as a NID. The probabilities assigned to the blocks of the NID are i's beliefs over j's models conditioned on a physical state. 3.2 Solution The solution of an I-ID proceeds in a bottom-up manner, and is implemented recursively. We start by solving the level 0 models, which, if intentional, are traditional IDs. Their solutions provide probability distributions over the other agents' actions, which are entered in the corresponding chance nodes found in the model node of the level 1 I-ID. The mapping from the level 0 models' decision nodes to the chance nodes is carried out so that actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. Given the distributions over the actions within the different chance nodes (one for each model of the other agent), the level 1 I-ID is transformed as shown in Fig. 1 (c). During the transformation, the conditional probability table (CPT) of the node, Aj, is populated such that the node assumes the distribution of each of the chance nodes depending on the value of the node, Mod [Mj]. As we mentioned previously, the values of the node Mod [Mj] denote the different models of the other agent, and its distribution is the agent i's belief over the models of j conditioned on the physical state. The transformed level 1 I-ID is a traditional ID that may be solved us 816 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 3: (a) A generic two time-slice level l I-DID for agent i in a setting with one other agent j. Notice the dotted model update link that denotes the update of the models of j and the distribution over the models over time. (b) The semantics of the model update link. ing the standard expected utility maximization method [18]. This procedure is carried out up to the level l I-ID whose solution gives the non-empty set of optimal actions that the agent should perform given its belief. Notice that analogous to IDs, I-IDs are suitable for online decision-making when the agent's current belief is known. 4. INTERACTIVE DYNAMIC INFLUENCE DIAGRAMS Interactive dynamic influence diagrams (I-DIDs) extend I-IDs (and NIDs) to allow sequential decision-making over several time steps. Just as DIDs are structured graphical representations of POMDPs, I-DIDs are the graphical online analogs for finitely nested I-POMDPs. 
I-DIDs may be used to optimize over a finite look-ahead given initial beliefs while interacting with other, possibly similar, agents.

4.1 Syntax

We depict a general two time-slice I-DID in Fig. 3 (a). In addition to the model nodes and the dashed policy link, what differentiates an I-DID from a DID is the model update link shown as a dotted arrow in Fig. 3 (a). We explained the semantics of the model node and the policy link in the previous section; we describe the model updates next.

The update of the model node over time involves two steps: First, given the models at time t, we identify the updated set of models that reside in the model node at time t+1. Recall from Section 2 that an agent's intentional model includes its belief. Because the agents act and receive observations, their models are updated to reflect their changed beliefs. Since the set of optimal actions for a model could include all the actions, and the agent may receive any one of |Ωj| possible observations, the updated set at time step t+1 will have at most |Mtj,l−1| |Aj| |Ωj| models. Here, |Mtj,l−1| is the number of models at time step t, and |Aj| and |Ωj| are the largest spaces of actions and observations, respectively, among all the models. Second, we compute the new distribution over the updated models given the original distribution and the probability of the agent performing the action and receiving the observation that led to the updated model. These steps are a part of agent i's belief update formalized using Eq. 1.

In Fig. 3 (b), we show how the dotted model update link is implemented in the I-DID. If each of the two level l−1 models ascribed to j at time step t results in one action, and j could make one of two possible observations, then the model node at time step t+1 contains up to four updated models (mt+1,1j,l−1, ..., mt+1,4j,l−1). These models differ in their initial beliefs, each of which is the result of j updating its beliefs due to its action and a possible observation. The decision nodes in each of the I-DIDs or DIDs that represent the lower level models are mapped to the corresponding chance nodes, as mentioned previously.

Figure 4: Transformed I-DID with the model nodes and model update link replaced with the chance nodes and the relationships (in bold).

Next, we describe how the distribution over the updated set of models (the distribution over the chance node Mod[Mt+1j] in Mt+1j,l−1) is computed. The probability that j's updated model is, say, mt+1,1j,l−1, depends on the probability of j performing the action and receiving the observation that led to this model, and the prior distribution over the models at time step t. Because the chance node Atj assumes the distribution of each of the action nodes based on the value of Mod[Mtj], the probability of the action is given by this chance node. In order to obtain the probability of j's possible observation, we introduce the chance node Oj, which, depending on the value of Mod[Mtj], assumes the distribution of the observation node in the lower level model denoted by Mod[Mtj]. Because the probability of j's observations depends on the physical state and the joint actions of both agents, the node Oj is linked with St+1, Atj, and Ati. Analogous to Atj, the conditional probability table of Oj is also a multiplexer modulated by Mod[Mtj]. Finally, the distribution over the prior models at time t is obtained from the chance node Mod[Mtj] in Mtj,l−1. Consequently, the chance nodes Mod[Mtj], Atj, and Oj form the parents of Mod[Mt+1j] in Mt+1j,l−1.
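The following sketch (illustrative Python, not the paper's code) enumerates the updated model set and its distribution: each candidate model is indexed by a (model, action, observation) triple, and its weight is proportional to Pr(m) · Pr(a | m) · Pr(o | a, m).

```python
from itertools import product

def update_model_node(models, prior, actions, observations,
                      pr_action, pr_obs, belief_update):
    """Second step of the model-node update. pr_action(m, a) = Pr(a|m) from
    the solved model's chance node; pr_obs(m, a, o) is a simplification:
    in the I-DID, Oj's CPT is a multiplexer that also conditions on S(t+1)
    and both agents' actions. belief_update(m, a, o) returns j's updated model."""
    new_prior = {}
    for m, a, o in product(models, actions, observations):
        w = prior[m] * pr_action(m, a) * pr_obs(m, a, o)
        if w > 0.0:
            m_new = belief_update(m, a, o)
            new_prior[m_new] = new_prior.get(m_new, 0.0) + w
    z = sum(new_prior.values())                       # normalize
    return {m: w / z for m, w in new_prior.items()}   # at most |M||A||Omega| entries
```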
Notice that the model update link may be replaced by the dependency links between the chance nodes that constitute the model nodes in the two time slices. In Fig. 4 we show the two time-slice I-DID with the model nodes replaced by the chance nodes and the relationships between them. Chance nodes and dependency links that are not in bold are standard, and usually found in DIDs. Expansion of the I-DID over more time steps requires the repetition of the two steps of updating the set of models that form the values of the model node and adding the relationships between the chance nodes, as many times as there are model update links. We note that the possible set of models of the other agent j grows exponentially with the number of time steps. For example, after T steps, there may be at most |Mt=1j,l−1| (|Aj| |Ωj|)^(T−1) candidate models residing in the model node.

4.2 Solution

Analogous to I-IDs, the solution to a level l I-DID for agent i expanded over T time steps may be carried out recursively. For the purpose of illustration, let l = 1 and T = 2. The solution method uses the standard look-ahead technique, projecting the agent's action and observation sequences forward from the current belief state [17], and finding the possible beliefs that i could have in the next time step. Because agent i has a belief over j's models as well, the look-ahead includes finding out the possible models that j could have in the future. Consequently, each of j's subintentional or level 0 models (represented using a standard DID) in the first time step must be solved to obtain its optimal set of actions. These actions are combined with the set of possible observations that j could make in that model, resulting in an updated set of candidate models (that include the updated beliefs) that could describe the behavior of j. Beliefs over this updated set of candidate models are calculated using the standard inference methods using the dependency relationships between the model nodes as shown in Fig. 3 (b). We note the recursive nature of this solution: in solving agent i's level 1 I-DID, j's level 0 DIDs must be solved. If the nesting of models is deeper, all models at all levels starting from 0 are solved in a bottom-up manner.

We briefly outline the recursive algorithm for solving agent i's level l I-DID expanded over T time steps with one other agent j in Fig. 5.

Figure 5: Algorithm for solving a level l > 0 I-DID (excerpt):
3. For each mtj in Range(Mtj,l−1) do
4. Recursively call the algorithm with the l−1 I-ID (or ID) that represents mtj and the horizon T−t+1
5. Map the decision node of the solved I-ID (or ID), OPT(mtj), to a chance node Aj
6. For each aj in OPT(mtj) do
7. For each oj in Oj (part of mtj) do

We adopt a two-phase approach: Given an I-ID of level l (described previously in Section 3) with all lower level models also represented as I-IDs or IDs (if level 0), the first step is to expand the level l I-ID over T time steps, adding the dependency links and the conditional probability tables for each node. We particularly focus on establishing and populating the model nodes (lines 3-11). Note that Range(·) returns the values (lower level models) of the random variable given as input (model node). In the second phase, we use a standard look-ahead technique, projecting the action and observation sequences over T time steps into the future, and backing up the utility values of the reachable beliefs.
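A skeletal rendering of this two-phase procedure (illustrative Python; the data structures, stubs, and helper names are our assumptions, not the authors' implementation, and `decision_to_chance` is the mapping sketched earlier):

```python
def solve_did(did, horizon):
    """Stub for a standard DID solver returning OPT (assumed available)."""
    raise NotImplementedError

def look_ahead_backup(i_did, horizon):
    """Stub: project action/observation sequences and back up utilities."""
    raise NotImplementedError

def solve_i_did(i_did, horizon):
    """Two-phase solution of a level l > 0 I-DID (cf. Fig. 5)."""
    # Phase 1: expand over `horizon` steps, populating the model nodes.
    for t in range(horizon - 1):
        for m in i_did.model_node(t):                        # line 3
            opt = (solve_i_did(m, horizon - t) if m.level > 0
                   else solve_did(m, horizon - t))           # line 4
            m.chance_node = decision_to_chance(opt, m.actions)  # line 5
            for a in opt:                                    # line 6
                for o in m.observations:                     # line 7
                    # add j's updated model to the t+1 model node and
                    # wire the Mod[M], A, and O dependency links
                    i_did.model_node(t + 1).add(m.updated(a, o))
    # Phase 2: standard look-ahead and expected-utility back-up.
    return look_ahead_backup(i_did, horizon)
```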
Similar to I-IDs, I-DIDs reduce to DIDs in the absence of other agents. As we mentioned previously, the 0th level models are traditional DIDs. Their solutions provide probability distributions over the actions of the agent modeled at that level to the I-DIDs at level 1. Given probability distributions over the other agent's actions, the level 1 I-DIDs can themselves be solved as DIDs, and provide probability distributions to yet higher level models. Assume that the number of models considered at each level is bounded by a number, M. Solving an I-DID of level l is then equivalent to solving O(M^l) DIDs.

5. EXAMPLE APPLICATIONS

To illustrate the usefulness of I-DIDs, we apply them to three problem domains. We describe, in particular, the formulation of the I-DID and the optimal prescriptions obtained on solving it.

5.1 Followership-Leadership in the Multiagent Tiger Problem

We begin our illustrations of using I-IDs and I-DIDs with a slightly modified version of the multiagent tiger problem discussed in [9]. The problem has two agents, each of which can open the right door (OR), open the left door (OL), or listen (L). In addition to hearing growls (from the left (GL) or from the right (GR)) when they listen, the agents also hear creaks (from the left (CL), from the right (CR), or no creaks (S)), which noisily indicate the other agent opening one of the doors. When any door is opened, the tiger persists in its original location with a probability of 95%. Agent i hears growls with a reliability of 65% and creaks with a reliability of 95%. Agent j, on the other hand, hears growls with a reliability of 95%. Thus, the setting is such that agent i hears agent j opening doors more reliably than it hears the tiger's growls. This suggests that i could use j's actions as an indication of the location of the tiger, as we discuss below. Each agent's preferences are as in the single agent game discussed in [13]. The transition, observation, and reward functions are shown in [16].

A good indicator of the usefulness of normative methods for decision-making like I-DIDs is the emergence of realistic social behaviors in their prescriptions. In settings of the persistent multiagent tiger problem that reflect real world situations, we demonstrate followership between the agents and, as shown in [15], deception among agents who believe that they are in a follower-leader type of relationship. In particular, we analyze the situational and epistemological conditions sufficient for their emergence. The followership behavior, for example, results from the agent knowing its own weaknesses, assessing the strengths, preferences, and possible behaviors of the other, and realizing that it is best for it to follow the other's actions in order to maximize its payoffs.

Let us consider a particular setting of the tiger problem in which agent i believes that j's preferences are aligned with its own (both of them just want to get the gold) and that j's hearing is more reliable in comparison to its own. As an example, suppose that j, on listening, can discern the tiger's location 95% of the time compared to i's 65% accuracy. Additionally, agent i does not have any initial information about the tiger's location. In other words, i's single-level nested belief, bi,1, assigns 0.5 to each of the two locations of the tiger. In addition, i considers two models of j, which differ in j's flat level 0 initial beliefs. This is represented in the level 1 I-ID shown in Fig. 6 (a).
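For concreteness, a minimal encoding of this setting (our illustrative Python; the full transition, observation, and reward tables are given in [16], so the numbers below reproduce only the reliabilities stated above):

```python
# States, actions, and observations of the multiagent tiger problem.
STATES = ["tiger-left", "tiger-right"]
ACTIONS = ["OL", "OR", "L"]
GROWLS, CREAKS = ["GL", "GR"], ["CL", "CR", "S"]

TIGER_PERSISTENCE = 0.95               # tiger stays put when a door is opened
GROWL_RELIABILITY = {"i": 0.65, "j": 0.95}
CREAK_RELIABILITY = {"i": 0.95}

def pr_growl(agent: str, growl: str, state: str) -> float:
    """Probability of hearing a growl from a side given the tiger's side."""
    correct = (growl == "GL") == (state == "tiger-left")
    r = GROWL_RELIABILITY[agent]
    return r if correct else 1.0 - r

print(pr_growl("i", "GL", "tiger-left"))   # 0.65
print(pr_growl("j", "GL", "tiger-left"))   # 0.95
```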
According to one model, j assigns a probability of 0.9 that the tiger is behind the left door, while the other model assigns 0.1 to that location (see Fig. 6 (b)). Agent i is undecided between these two models of j.

Figure 6: (a) Level 1 I-ID of agent i; (b) two level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, A1j, A2j, in (a).

If we vary i's hearing ability and solve the corresponding level 1 I-ID expanded over three time steps, we obtain the normative behavioral policies shown in Fig. 7 that exhibit followership behavior. If i's probability of correctly hearing the growls is 0.65, then, as shown in the policy in Fig. 7 (a), i begins to conditionally follow j's actions: i opens the same door that j opened previously iff i's own assessment of the tiger's location confirms j's pick. If i loses the ability to correctly interpret the growls completely, it blindly follows j and opens the same door that j opened previously (Fig. 7 (b)).

Figure 7: Emergence of (a) conditional followership, and (b) blind followership in the tiger problem. Behaviors of interest are in bold. * is a wildcard, and denotes any one of the observations.

We observed that a single level of belief nesting (beliefs about the other's models) was sufficient for followership to emerge in the tiger problem. However, the epistemological requirements for the emergence of leadership are more complex. For an agent, say j, to emerge as a leader, followership must first emerge in the other agent i. As we mentioned previously, if i is certain that its preferences are identical to those of j, and believes that j has a better sense of hearing, i will follow j's actions over time. Agent j emerges as a leader if it believes that i will follow it, which implies that j's belief must be nested two levels deep to enable it to recognize its leadership role. Realizing that i will follow presents j with an opportunity to influence i's actions for the benefit of the collective good or for its self-interest alone.

For example, in the tiger problem, let us consider a setting in which, if both i and j open the correct door, each gets a payoff of 20, double the original. If j alone selects the correct door, it gets the payoff of 10. On the other hand, if both agents pick the wrong door, their penalties are cut in half. In this setting, it is in both j's best interest as well as the collective betterment for j to use its expertise in selecting the correct door, and thus be a good leader. However, consider a slightly different problem in which j gains from i's loss and is penalized if i gains. Specifically, let i's payoff be subtracted from j's, indicating that j is antagonistic toward i: if j picks the correct door and i the wrong one, then i's loss of 100 becomes j's gain. Agent j believes that i incorrectly thinks that j's preferences are those that promote the collective good and that it starts off by believing with 99% confidence where the tiger is. Because i believes that its preferences are similar to those of j, and that j starts by believing almost surely that one of the two is the correct location (two level 0 models of j), i will start by following j's actions. We show i's normative policy on solving its singly-nested I-DID over three time steps in Fig. 8 (a). The policy demonstrates that i will blindly follow j's actions.
Since the tiger persists in its original location with a probability of 0.95, i will select the same door again. If j begins the game with a 99% probability that the tiger is on the right, solving j's I-DID nested two levels deep results in the policy shown in Fig. 8 (b). Even though j is almost certain that OL is the correct action, it will start by selecting OR, followed by OL. Agent j's intention is to deceive i who, it believes, will follow j's actions, so as to gain a payoff of 110 in the second time step, which is more than what j would gain if it were to be honest.

Figure 8: Emergence of deception between agents in the tiger problem. Behaviors of interest are in bold. * denotes as before. (a) Agent i's policy demonstrating that it will blindly follow j's actions. (b) Even though j is almost certain that the tiger is on the right, it will start by selecting OR, followed by OL, in order to deceive i.

5.2 Altruism and Reciprocity in the Public Good Problem

The public good (PG) problem [7] consists of a group of M agents, each of whom must either contribute some resource to a public pot or keep it for themselves. Since resources contributed to the public pot are shared among all the agents, they are less valuable to the agent when in the public pot. However, if all agents choose to contribute their resources, then the payoff to each agent is more than if no one contributes. Since an agent gets its share of the public pot irrespective of whether it has contributed or not, the dominating action is for each agent to not contribute, and instead "free ride" on others' contributions. However, the behaviors of human players in empirical simulations of the PG problem differ from the normative predictions. The experiments reveal that many players initially contribute a large amount to the public pot, and continue to contribute when the PG problem is played repeatedly, though in decreasing amounts [4]. Many of these experiments [5] report that a small core group of players persistently contributes to the public pot even when all others are defecting. These experiments also reveal that players who persistently contribute have altruistic or reciprocal preferences matching expected cooperation of others.

For simplicity, we assume that the game is played between M = 2 agents, i and j. Let each agent be initially endowed with XT amount of resources. While the classical PG game formulation permits each agent to contribute any quantity of resources (≤ XT) to the public pot, we simplify the action space by allowing two possible actions. Each agent may choose to either contribute (C) a fixed amount of the resources, or not contribute. The latter action is denoted as defect (D). We assume that the actions are not observable to others. The value of resources in the public pot is discounted by ci for each agent i, where ci is the marginal private return. We assume that ci < 1 so that the agent does not benefit enough to contribute to the public pot for private gain alone. Simultaneously, ciM > 1, making collective contribution Pareto optimal.

Table 1: The one-shot PG game with punishment.

In order to encourage contributions, the contributing agents punish free riders, but incur a small cost for administering the punishment. Let P be the punishment meted out to the defecting agent and cp the non-zero cost of punishing for the contributing agent. For simplicity, we assume that the cost of punishing is the same for both agents.
The one-shot PG game with punishment is shown in Table 1. Let ci = cj and cp > 0. If P > XT − ciXT, then defection is no longer a dominating action. If P < XT − ciXT, then defection is the dominating action for both. If P = XT − ciXT, then the game is not dominance-solvable.

Figure 9: (a) Level 1 I-ID of agent i; (b) level 0 IDs of agent j with decision nodes mapped to the chance nodes, A1j and A2j, in (a).

We formulate a sequential version of the PG problem with punishment from the perspective of agent i. Though in the repeated PG game the quantity in the public pot is revealed to all the agents after each round of actions, we assume in our formulation that it is hidden from the agents. Each agent may contribute a fixed amount, xc, or defect. An agent, on performing an action, receives an observation of plenty (PY) or meager (MR) symbolizing the state of the public pot. Notice that the observations are also indirectly indicative of agent j's actions, because the state of the public pot is influenced by them. The amount of resources in agent i's private pot is perfectly observable to i. The payoffs are analogous to Table 1.

Borrowing from the empirical investigations of the PG problem [5], we construct level 0 IDs for j that model altruistic and non-altruistic types (Fig. 9 (b)). Specifically, our altruistic agent has a high marginal private return (cj is close to 1) and does not punish others who defect. Let xc = 1 and let the level 0 agent be punished half the time it defects. With one action remaining, both types of agents choose to contribute to avoid being punished. With two actions to go, the altruistic type chooses to contribute, while the other defects. This is because cj for the altruistic type is close to 1, so the expected punishment 0.5P > (1 − cj), and the altruistic type avoids the punishment. Because cj for the non-altruistic type is smaller, it prefers not to contribute. With three steps to go, the altruistic agent contributes to avoid punishment (0.5P > 2(1 − cj)), and the non-altruistic type defects. For more than three steps, while the altruistic agent continues to contribute to the public pot depending on how close its marginal private return is to 1, the non-altruistic type prescribes defection.

We analyzed the decisions of an altruistic agent i modeled using a level 1 I-DID expanded over 3 time steps. i ascribes the two level 0 models mentioned previously to j (see Fig. 9). If i believes with a probability of 1 that j is altruistic, i chooses to contribute for each of the three steps. This behavior persists when i is unaware of whether j is altruistic (Fig. 10 (a)), and when i assigns a high probability to j being the non-altruistic type. However, when i believes with a probability of 1 that j is non-altruistic and will thus surely defect, i chooses to defect to avoid being punished and because its marginal private return is less than 1. These results demonstrate that the behavior of our altruistic type resembles that found experimentally. The non-altruistic level 1 agent chooses to defect regardless of how likely it believes the other agent to be altruistic.

We also analyzed the behavior of a reciprocal agent type that matches expected cooperation or defection. The reciprocal type's marginal private return is similar to that of the non-altruistic type; however, it obtains a greater payoff when its action is similar to that of the other. We consider the case when the reciprocal agent i is unsure of whether j is altruistic and believes that the public pot is likely to be half full.
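The payoff structure can be made concrete with a short sketch (illustrative Python; Table 1's exact entries are not reproduced in this text, so the formulas below follow the verbal description: each agent keeps what it does not contribute, receives the discounted share ci of the pot, and contributors punish defectors at cost cp):

```python
def pg_payoff(ai: str, aj: str, XT=1.0, c=0.6, P=0.5, cp=0.1) -> float:
    """Payoff to agent i in the one-shot PG game with punishment.
    ai, aj in {"C", "D"}; the full endowment XT is contributed on "C"."""
    pot = XT * ((ai == "C") + (aj == "C"))
    keep = 0.0 if ai == "C" else XT
    u = keep + c * pot                  # share of the discounted pot
    if ai == "D" and aj == "C":
        u -= P                          # punished by the contributor
    if ai == "C" and aj == "D":
        u -= cp                         # cost of administering punishment
    return u

# Dominance check: defection dominates iff P < XT - c*XT (here 1 - 0.6 = 0.4).
for P in (0.3, 0.4, 0.5):
    d_better = pg_payoff("D", "C", P=P) > pg_payoff("C", "C", P=P)
    print(P, "defect dominates against a contributor:", d_better)
```

Running the check shows the dominance flipping exactly at P = XT − ciXT, matching the case analysis above.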
For this prior belief, i chooses to defect. On receiving an observation of plenty, i decides to contribute, while an observation of meager makes it defect (Fig. 10 (b)). This is because an observation of plenty signals that the pot is likely to be greater than half full, which results from j's action to contribute. Thus, among the two models ascribed to j, its type is likely to be altruistic, making it likely that j will contribute again in the next time step. Agent i therefore chooses to contribute to reciprocate j's action. An analogous reasoning leads i to defect when it observes a meager pot. With one action to go, i, believing that j contributes, will choose to contribute too to avoid punishment, regardless of its observations.

Figure 10: (a) An altruistic level 1 agent always contributes. (b) A reciprocal agent i starts off by defecting, followed by choosing to contribute or defect based on its observation of plenty (indicating that j is likely altruistic) or meager (j is non-altruistic).

5.3 Strategies in Two-Player Poker

Poker is a popular zero-sum card game that has received much attention in the AI research community as a testbed [2]. Poker is played among M ≥ 2 players in which each player receives a hand of cards from a deck. While several flavors of Poker with varying complexity exist, we consider a simple version in which each player has three plies during which the player may either exchange a card (E), keep the existing hand (K), fold (F) and withdraw from the game, or call (C), requiring all players to show their hands. To keep matters simple, let M = 2, and let each player receive a hand consisting of a single card drawn from the same suit. Thus, during a showdown, the player who has the numerically larger card (2 is the lowest, ace is the highest) wins the pot. During an exchange of cards, the discarded card is placed either in the L pile, indicating to the other agent that it was a low-numbered card (less than 8), or in the H pile, indicating that the card had a rank greater than or equal to 8. Notice that, for example, if a lower-numbered card is discarded, the probability of receiving a low card in exchange is now reduced.

We show the level 1 I-ID for the simplified two-player Poker in Fig. 11. We considered two models (personality types) of agent j. The conservative type believes that it is likely that its opponent has a high-numbered card in its hand. On the other hand, the aggressive agent j believes with a high probability that its opponent has a lower-numbered card. Thus, the two types differ in their beliefs over their opponent's hand. In both these level 0 models, the opponent is assumed to perform its actions following a fixed, uniform distribution. With three actions to go, regardless of its hand (unless it is an ace), the aggressive agent chooses to exchange its card, with the intent of improving on its current hand. This is because it believes the other to have a low card, which improves its chances of getting a high card during the exchange. The conservative agent chooses to keep its card, no matter its hand, because its chances of getting a high card are slim, as it believes that its opponent has one.

Figure 11: (a) Level 1 I-ID of agent i. The observation reveals information about j's hand of the previous time step. (b) Level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, A1j, A2j, in (a).
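The discard-pile signal can be sketched as follows (illustrative Python, our own simplification: a single-suit deck with ranks 2-14 and ace = 14, as stated above; the renormalization ignores the finer deck-depletion effect the text mentions):

```python
RANKS = list(range(2, 15))   # 2..10, J=11, Q=12, K=13, A=14 (one suit)

def discard_pile(card: int) -> str:
    """Public signal emitted on an exchange: 'L' for rank < 8, 'H' otherwise."""
    return "L" if card < 8 else "H"

def posterior_over_discard(pile: str, my_card: int):
    """Observing the pile restricts beliefs over j's discarded card; here we
    simply renormalize uniformly over the ranks consistent with the signal."""
    consistent = [r for r in RANKS if r != my_card and discard_pile(r) == pile]
    p = 1.0 / len(consistent)
    return {r: p for r in consistent}

print(posterior_over_discard("L", my_card=8))  # uniform over {2, ..., 7}
```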
The policy of a level 1 agent i who believes that each card except its own has an equal likelihood of being in j's hand (a neutral personality type), and that j could be either an aggressive or a conservative type, is shown in Fig. 12. i's own hand contains the card numbered 8. The agent starts by keeping its card. On seeing that j did not exchange a card (N), i believes with probability 1 that j is conservative and hence will keep its card. i responds by either keeping its card or exchanging it, because j is equally likely to have a lower or a higher card. If i observes that j discarded its card into the L or H pile, i believes that j is aggressive. On observing L, i realizes that j had a low card, and is likely to have a high card after its exchange. Because the probability of receiving a low card is high now, i chooses to keep its card. On observing H, believing that the probability of receiving a high-numbered card is high, i chooses to exchange its card. In the final step, i chooses to call regardless of its observation history, because its belief that j has a higher card is not sufficiently high to conclude that it is better to fold and relinquish the payoff. This is partly due to the fact that an observation of, say, L resets agent i's previous time step beliefs over j's hand to the low-numbered cards only.

Figure 12: A level 1 agent i's three step policy in the Poker problem. i starts by believing that j is equally likely to be aggressive or conservative and could have any card in its hand with equal probability.

6. DISCUSSION

We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves on the previous work significantly by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior beliefs. We are currently investigating ways to solve I-DIDs approximately with provable bounds on the solution quality.

Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work. The first author would like to acknowledge the support of a UGARF grant.
Graphical Models for Online Solutions to Interactive POMDPs

ABSTRACT

We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models, called interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.

1. INTRODUCTION

Interactive partially observable Markov decision processes (I-POMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments. They generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment. The models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11]. I-POMDPs adopt a subjective approach to understanding strategic behavior, rooted in a decision-theoretic framework that takes a decision-maker's perspective in the interaction.

In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14] and, more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms using a graphical formalism to represent games of imperfect information, with a decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID, and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game, keeping in mind the different models of each agent. Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently. However, MAIDs provide an analysis of the game from an external viewpoint, and the applicability of both is limited to static single-play games. Matters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe.
I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both the other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations. In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special purpose "policy link" introduced in the representation of the I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the "model update link" that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of I-DID that is significantly more transparent and semantically clear.

2. BACKGROUND: FINITELY NESTED I-POMDPS

Interactive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others. For simplicity of presentation we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple I-POMDPi,l = ⟨ISi,l, A, Ti, Ωi, Oi, Ri⟩, where:
• ISi,l denotes a set of interactive states defined as ISi,l = S × Mj,l−1, where Mj,l−1 = {Θj,l−1 ∪ SMj}, for l ≥ 1, and ISi,0 = S, where S is the set of states of the physical environment. Θj,l−1 is the set of computable intentional models of agent j: θj,l−1 = ⟨bj,l−1, θ̂j⟩, where the frame θ̂j = ⟨A, Ωj, Tj, Oj, Rj, OCj⟩. Here, j is Bayes rational and OCj is j's optimality criterion. SMj is the set of subintentional models of j. Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below. Similar formulations of nested spaces have appeared in [1, 3].
• A = Ai × Aj is the set of joint actions of all agents in the environment;
• Ti: S × A × S → [0, 1] describes the effect of the joint actions on the physical states of the environment;
• Ωi is the set of observations of agent i;
• Oi: S × A × Ωi → [0, 1] gives the likelihood of the observations given the physical state and joint action;
• Ri: ISi × A → R describes agent i's preferences over its interactive states. Usually only the physical states will matter.

Agent i's policy is the mapping Ω*i → Δ(Ai), where Ω*i is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, Δ(ISi) → Δ(Ai).

2.1 Belief Update

Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes.
However, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's models have to be included in i's belief update. Specifically, if j is intentional, then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model is subintentional, then j's probable observations are appended to the observation history contained in the model. Formally, we have:

bti,l(ist) = β Σist−1 bt−1i,l(ist−1) Σat−1j Pr(at−1j | θt−1j,l−1) Ti(st−1, at−1, st) Oi(st, at−1, oti) Σotj Oj(st, at−1, otj) τ(SEθ̂tj(bt−1j, at−1j, otj) = btj)   (1)

where β is a normalizing constant, τ(·) is 1 if j's belief update SEθ̂tj(bt−1j, at−1j, otj) yields btj and 0 otherwise, and Pr(ist | at−1i, bt−1i,l) is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9]. If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SEθ̂tj(bt−1j, at−1j, otj)), which in turn could invoke i's belief update, and so on. This recursion in belief nesting bottoms out at the 0th level. At this level, the belief update of the agent reduces to a POMDP belief update. For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].

2.2 Value Iteration

Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state:

U(θi) = max ai∈Ai { Σis ERi(is, ai) bi,l(is) + γ Σoi∈Ωi Pr(oi | ai, bi,l) U(⟨SEθ̂i(bi,l, ai, oi), θ̂i⟩) }   (2)

where ERi(is, ai) = Σaj Ri(is, ai, aj) Pr(aj | mj,l−1) (since is = (s, mj,l−1)). Eq. 2 is a basis for value iteration in I-POMDPs. Agent i's optimal action, a*i, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, OPT(θi), defined as:

OPT(θi) = argmax ai∈Ai { Σis ERi(is, ai) bi,l(is) + γ Σoi∈Ωi Pr(oi | ai, bi,l) U(⟨SEθ̂i(bi,l, ai, oi), θ̂i⟩) }

3. INTERACTIVE INFLUENCE DIAGRAMS
3.1 Syntax
3.2 Solution
4. INTERACTIVE DYNAMIC INFLUENCE DIAGRAMS
4.1 Syntax
4.2 Solution
5. EXAMPLE APPLICATIONS
5.1 Followership-Leadership in the Multiagent Tiger Problem
5.2 Altruism and Reciprocity in the Public Good Problem
5.3 Strategies in Two-Player Poker

Figure 12: A level 1 agent i's three step policy in the Poker problem. i starts by believing that j is equally likely to be aggressive or conservative and could have any card in its hand with equal probability.

6. DISCUSSION

We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves on the previous work significantly by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior beliefs.
We are currently investigating ways to solve I-DIDs approximately with provable bounds on the solution quality. Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work. The first author would like to acknowledge the support of a UGARF grant.
Graphical Models for Online Solutions to Interactive POMDPs

ABSTRACT

We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models, called interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.

1. INTRODUCTION

Interactive partially observable Markov decision processes (I-POMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments. They generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment. The models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11]. In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14] and, more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms using a graphical formalism to represent games of imperfect information, with a decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID, and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game, keeping in mind the different models of each agent. Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently. Matters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe. I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both the other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations.
In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special purpose "policy link" introduced in the representation of the I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the "model update link" that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of I-DID that is significantly more transparent and semantically clear.

2. BACKGROUND: FINITELY NESTED I-POMDPS

Interactive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others. For simplicity of presentation we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple I-POMDPi,l = ⟨ISi,l, A, Ti, Ωi, Oi, Ri⟩. Θj,l−1 is the set of computable intentional models of agent j: θj,l−1 = ⟨bj,l−1, θ̂j⟩, where the frame θ̂j = ⟨A, Ωj, Tj, Oj, Rj, OCj⟩. SMj is the set of subintentional models of j. Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below. Similar formulations of nested spaces have appeared in [1, 3]. Usually only the physical states will matter. Agent i's policy is the mapping Ω*i → Δ(Ai), where Ω*i is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, Δ(ISi) → Δ(Ai).

2.1 Belief Update

Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes. However, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's models have to be included in i's belief update. Specifically, if j is intentional, then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model is subintentional, then j's probable observations are appended to the observation history contained in the model. Pr(ist | at−1i, bt−1i,l) is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9].
If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SEθ̂tj(bt−1j, at−1j, otj)), which in turn could invoke i's belief update, and so on. This recursion in belief nesting bottoms out at the 0th level. At this level, the belief update of the agent reduces to a POMDP belief update. For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].

2.2 Value Iteration

Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state; Eq. 2 is a basis for value iteration in I-POMDPs. Agent i's optimal action, a*i, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, OPT(θi).

Figure 12: A level 1 agent i's three step policy in the Poker problem.

6. DISCUSSION

We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves on the previous work significantly by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior beliefs.

Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work.
I-59
An Agent-Based Approach for Privacy-Preserving Recommender Systems
Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multi-Agent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach.
[ "recommend system", "privaci", "inform filter", "privaci-preserv recommend system", "multi-agent system technolog", "inform search", "retriev-inform filter", "distribut artifici intellig-multiag system", "secur multi-parti comput", "trust softwar", "java secur model", "learn-base approach", "featur-base approach", "multi-agent system", "trust" ]
[ "P", "P", "P", "M", "M", "M", "M", "M", "U", "U", "U", "M", "M", "M", "U" ]
An Agent-Based Approach for Privacy-Preserving Recommender Systems Richard Cissée DAI-Labor, TU Berlin Franklinstrasse 28/29 10587 Berlin richard.cissee@dai-labor.de Sahin Albayrak DAI-Labor, TU Berlin Franklinstrasse 28/29 10587 Berlin sahin.albayrak@dai-labor.de ABSTRACT Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multi-Agent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval-Information Filtering; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems General Terms Management, Security, Human Factors, Standardization 1. INTRODUCTION Information Filtering (IF) systems aim at countering information overload by extracting information that is relevant for a given user out of a large body of information available via an information provider. In contrast to Information Retrieval (IR) systems, where relevant information is extracted based on search queries, IF architectures generate personalized information based on user profiles containing, for each given user, personal data, preferences, and rated items. The provided body of information is usually structured and collected in provider profiles. Filtering techniques operate on these profiles in order to generate recommendations of items that are probably relevant for a given user, or in order to determine users with similar interests, or both. Depending on the respective goal, the resulting systems constitute Recommender Systems [5], Matchmaker Systems [10], or a combination thereof. The aspect of privacy is an essential issue in all IF systems: Generating personalized information obviously requires the use of personal data. According to surveys indicating major privacy concerns of users in the context of Recommender Systems and e-commerce in general [23], users can be expected to be less reluctant to provide personal information if they trust the system to be privacy-preserving with regard to personal data. Similar considerations also apply to the information provider, who may want to control the dissemination of the provided information, and to the provider of the filtering techniques, who may not want the details of the utilized filtering algorithms to become common knowledge. A privacy-preserving IF system should therefore balance these requirements and protect the privacy of all parties involved in a multilateral way, while addressing general requirements regarding performance, security and quality of the recommendations as well. As described in the following section, there are several approaches with similar goals, but none of these provide a generic approach in which the privacy of all parties is preserved. 
We have developed an agent-based approach for privacy-preserving IF which has been utilized for realizing a combined Recommender/Matchmaker System as part of an application supporting users in planning entertainment-related activities. In this paper, we focus on the Recommender System functionality. Our approach is based on Multi-Agent System (MAS) technology because fundamental features of agents such as autonomy, adaptability and the ability to communicate are essential requirements of our approach. In other words, the realized approach does not merely constitute a solution for privacy-preserving IF within a MAS context, but rather utilizes a MAS architecture in order to realize a solution for privacy-preserving IF, which could not be realized easily otherwise. The paper is structured as follows: Section 2 describes related work. Section 3 describes the general ideas of our approach. In Section 4, we describe essential details of the modules of our approach and their implementation. In Section 5, we evaluate the approach, mainly via the realized application. Section 6 concludes the paper with an outlook and outlines further work.

2. RELATED WORK

There is a large amount of work in related areas, such as Private Information Retrieval [7], Privacy-Preserving Data Mining [2], and other privacy-preserving protocols [4, 16], most of which is based on Secure Multi-Party Computation [27]. We have ruled out Secure Multi-Party Computation approaches mainly because of their complexity, and because the algorithm that is computed securely is not considered to be private in these approaches. Various enforcement mechanisms have been suggested that are applicable in the context of privacy-preserving Information Filtering, such as enterprise privacy policies [17] or Hippocratic databases [1], both of which annotate user data with additional meta-information specifying how the data is to be handled on the provider side. These approaches ultimately assume that the provider actually intends to protect the privacy of the user data, and offer support for this task, but they are not intended to prevent the provider from acting in a malicious manner. Trusted computing, as specified by the Trusted Computing Group, aims at realizing trusted systems by increasing the security of open systems to a level comparable with the level of security that is achievable in closed systems. It is based on a combination of tamper-proof hardware and various software components. Some example applications, including peer-to-peer networks, distributed firewalls, and distributed computing in general, are listed in [13]. There are some approaches for privacy-preserving Recommender Systems based on distributed collaborative filtering, in which recommendations are generated via a public model aggregating the distributed user profiles without containing explicit information about user profiles themselves. This is achieved via Secure Multi-Party Computation [6], or via random perturbation of the user data [20]. In [19], various approaches are integrated within a single architecture. In [10], an agent-based approach is described in which user agents representing similar users are discovered via a transitive traversal of user agents. Privacy is preserved through pseudonymous interaction between the agents and through adding obfuscating data to personal information. More recent related approaches are described in [18].
In [3], an agent-based architecture for privacy-preserving demographic filtering is described which may be generalized in order to support other kinds of filtering techniques. While in some aspects similar to our approach, this architecture addresses at least two aspects inadequately, namely the protection of the filter against manipulation attempts, and the prevention of collusions between the filter and the provider. 3. PRIVACY-PRESERVING INFORMATION FILTERING We identify three main abstract entities participating in an information filtering process within a distributed system: A user entity, a provider entity, and a filter entity. Whereas in some applications the provider and filter entities explicitly trust each other, because they are deployed by the same party, our solution is applicable more generically because it does not require any trust between the main abstract entities. In this paper, we focus on aspects related to the information filtering process itself, and omit all aspects related to information collection and processing, i.e. the stages in which profiles are generated and maintained, mainly because these stages are less critical with regard to privacy, as they involve fewer different entities. 3.1 Requirements Our solution aims at meeting the following requirements with regard to privacy: • User Privacy: No linkable information about user profiles should be acquired permanently by any other entity or external party, including other user entities. Single user profile items, however, may be acquired permanently if they are unlinkable, i.e. if they cannot be attributed to a specific user or linked to other user profile items. Temporary acquisition of private information is permitted as well. Sets of recommendations may be acquired permanently by the provider, but they should not be linkable to a specific user. These concessions simplify the resulting protocol and allow the provider to obtain recommendations and single unlinkable user profile items, and thus to determine frequently requested information and optimize the offered information accordingly. • Provider Privacy: No information about provider profiles, with the exception of the recommendations, should be acquired permanently by other entities or external parties. Again, temporary acquisition of private information is permitted. Additionally, the propagation of provider information is entirely under the control of the provider. Thus, the provider is enabled to prevent misuse such as the automatic large-scale extraction of information. • Filter Privacy: Details of the algorithms applied by the filtering techniques should not be acquired permanently by any other entity or external party. General information about the algorithm may be provided by the filter entity in order to help other entities to reach a decision on whether to apply the respective filtering technique. In addition, general requirements regarding the quality of the recommendations as well as security aspects, performance and broadness of the resulting system have to be addressed as well. While minor trade-offs may be acceptable, the resulting system should reach a level similar to regular Recommender Systems with regard to these requirements. 
3.2 Outline of the Solution

The basic idea for realizing a protocol that fulfills these privacy-related requirements in Recommender Systems follows from allowing the temporary acquisition of private information (see [8] for the original approach): User and provider entity both propagate the respective profile data to the filter entity. The filter entity provides the recommendations, and subsequently deletes all private information, thus fulfilling the requirement regarding permanent acquisition of private information. The entities whose private information is propagated have to be certain that the respective information is actually acquired temporarily only. Trust in this regard may be established in two main ways:

• Trusted Software: The respective entity itself is trusted to remove the respective information as specified.

• Trusted Environment: The respective entity operates in an environment that is trusted to control the communication and life cycle of the entity to an extent that the removal of the respective information may be achieved regardless of the attempted actions of the entity itself. Additionally, the environment itself is trusted not to act in a malicious manner (e.g. it is trusted not to acquire and propagate the respective information itself).

In both cases, trust may be established in various ways: reputation-based mechanisms, additional trusted third parties certifying entities or environments, or trusted computing mechanisms may be used. Our approach is based on a trusted environment realized via trusted computing mechanisms, because we see this solution as the most generic and realistic approach. This decision is discussed briefly in Section 5.

We are now able to specify the abstract information filtering protocol as shown in Figure 1: The filter entity deploys a Temporary Filter Entity (TFE) operating in a trusted environment. The user entity deploys an additional relay entity operating in the same environment. Through mechanisms provided by this environment, the relay entity is able to control the communication of the TFE, and the provider entity is able to control the communication of both the relay entity and the TFE. Thus, it is possible to ensure that the controlled entities are only able to propagate recommendations, but no other private information.

In the first stage (steps 1.1 to 1.3 of Figure 1), the relay entity establishes control of the TFE, and thus prevents it from propagating user profile information. User profile data is propagated without participation of the provider entity from the user entity to the TFE via the relay entity. In the second stage (steps 2.1 to 2.3 of Figure 1), the provider entity establishes control of both relay and TFE, and thus prevents them from propagating provider profile information. Provider profile data is propagated from the provider entity to the TFE via the relay entity. In the third stage (steps 3.1 to 3.5 of Figure 1), the TFE returns the recommendations via the relay entity, and the controlled entities are terminated. Taken together, these steps ensure that all private information is acquired temporarily only by the other main entities. The problems of determining acceptable queries on the provider profile and ensuring unlinkability of the recommendations are discussed in the following section.
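The core invariant of this outline, that a controlled entity can only emit what its controller's policy admits, can be illustrated with a toy single-process model; all class and policy names below are ours, and actual enforcement happens at the platform level, as described in Section 4.1:

```java
import java.util.List;

// Toy model of the control relationship of Figure 1: a controlled entity can
// only emit messages that its controller's policy admits. In stage 3, the
// policy admits nothing but recommendations. Names are illustrative only.
public class ControlSketch {
    interface Policy { boolean admits(String messageType); }

    static class ControlledEntity {
        private final Policy policy;
        ControlledEntity(Policy policy) { this.policy = policy; }
        void send(String messageType, Object payload) {
            if (!policy.admits(messageType))
                throw new SecurityException("blocked: " + messageType);
            System.out.println("sent " + messageType + ": " + payload);
        }
    }

    public static void main(String[] args) {
        Policy stage3 = "recommendations"::equals;   // only recommendations may leave
        ControlledEntity tfe = new ControlledEntity(stage3);
        tfe.send("recommendations", List.of("item 17", "item 42")); // allowed
        try {
            tfe.send("user-profile", "private data");               // blocked
        } catch (SecurityException e) {
            System.out.println(e.getMessage());
        }
    }
}
```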
Our approach requires each entity in the distributed architecture to have the following five main abilities:

• the ability to perform certain well-defined tasks (such as carrying out a filtering process) with a high degree of autonomy, i.e. largely independent of other entities (e.g. because the respective entity is not able to communicate in an unrestricted manner),
• the ability to be deployable dynamically in a well-defined environment,
• the ability to communicate with other entities,
• the ability to achieve protection against external manipulation attempts, and
• the ability to control and restrict the communication of other entities.

Figure 1: The abstract privacy-preserving information filtering protocol. All communication across the environments indicated by dashed lines is prevented, with the exception of communication with the controlling entity.

MAS architectures are an ideal solution for realizing a distributed system characterized by these features, because they provide agents constituting entities that are actually characterized by autonomy, mobility and the ability to communicate [26], as well as agent platforms as environments providing means to realize the security of agents. In this context, the issue of malicious hosts, i.e. hosts attacking agents, has to be addressed explicitly. Furthermore, existing MAS architectures generally do not allow agents to control the communication of other agents. It is possible, however, to expand a MAS architecture and to provide designated agents with this ability. For these reasons, our solution is based on a FIPA [11]-compliant MAS architecture. The entities introduced above are mapped directly to agents, and the trusted environment in which they exist is realized in the form of agent platforms.

In addition to the MAS architecture itself, which is assumed as given, our solution consists of the following five main modules:

• The Controller Module described in Section 4.1 provides functionality for controlling the communication capabilities of agents.

• The Transparent Persistence Module facilitates the use of different data storage mechanisms, and provides a uniform interface for accessing persistent information, which may be utilized for monitoring critical interactions involving potentially private information, e.g. as part of queries. Its description is outside the scope of this paper.

• The Recommender Module, details of which are described in Section 4.2, provides Recommender System functionality.

• The Matchmaker Module provides Matchmaker System functionality. It additionally utilizes social aspects of MAS technology. Its description is outside the scope of this paper.

• Finally, a separate module described in Section 4.4 provides Exemplary Filtering Techniques in order to show that the various restrictions imposed on filtering techniques by our approach may actually be fulfilled.

The trusted environment introduced above encompasses the MAS architecture itself and the Controller Module, which have to be trusted to act in a non-malicious manner in order to rule out the possibility of malicious hosts.

4. MAIN MODULES AND IMPLEMENTATION

In this section, we describe the main modules of our approach, and outline the implementation. While we have chosen a specific architecture for the implementation, the specification of the modules is applicable to any FIPA-compliant MAS architecture.
A module basically encompasses ontologies, functionality provided by agents via agent services, and internal functionality.

Throughout this paper, {m}KX denotes a message m encrypted via a non-specified symmetric encryption scheme with a secret key KX used for encryption and decryption which is initially known only to participant X. A key KXY is a key shared by participants X and Y. A cryptographic hash function is used at various points of the protocol, i.e. a function returning a hash value h(x) for given data x that is both preimage-resistant and collision-resistant. (In the implementation, we have used the Advanced Encryption Standard (AES) as the symmetric encryption scheme and SHA-1 as the cryptographic hash function.) We denote a set of hash values for a data set X = {x1, ..., xn} as H(X) = {h(x1), ..., h(xn)}, whereas h(X) denotes a single hash value of a data set.

4.1 Controller Module

As noted above, the ability to control the communication of agents is generally not a feature of existing MAS architectures (a recent survey on agent environments [24] concludes that aspects related to agent environments are often neglected, and does not indicate any existing work in this particular area), but it is at the same time a central requirement of our approach for privacy-preserving Information Filtering. The required functionality cannot be realized based on regular agent services or components, because an agent on a platform is usually not allowed to interfere with the actions of other agents in any way. Therefore, we add additional infrastructure providing the required functionality to the MAS architecture itself, resulting in an agent environment with extended functionality and responsibilities.

Controlling the communication capabilities of an agent is realized by restricting its incoming and outgoing communication via rules, in a manner similar to a firewall, but with the consent of the respective agent. The rules cover communication with specific platforms or with agents on external platforms, as well as other possible communication channels, such as the file system. Consent is required because otherwise the overall security would be compromised, as attackers could arbitrarily block various communication channels. Our approach does not require controlling the communication between agents on the same platform, and therefore this aspect is not addressed. Consequently, all rules addressing communication capabilities have to be enforced across entire platforms, because otherwise a controlled agent could simply use a non-controlled agent on the same platform as a relay for communicating with agents residing on external platforms. Various agent services provide functionality for adding and revoking control of platforms, including functionality required in complex scenarios where controlled agents in turn control further platforms.

The implementation of the actual control mechanism depends on the actual MAS architecture. In our implementation, we have utilized methods provided via the Java Security Manager as part of the Java security model. Thus, the supervisor agent is enabled to define custom security policies, thereby granting or denying other agents access to resources required for communication with other agents as well as communication in general, such as files or sockets for TCP/IP-based communication.
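To make this concrete, the following is a minimal sketch of how such platform-wide control might be enforced via a custom Java SecurityManager. The class name, rule representation, and supervisor API are our illustrative assumptions, not the JIAC IV interface, and the SecurityManager mechanism has been deprecated in recent Java releases:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not the JIAC IV API: a JVM-wide security manager that
// rejects TCP/IP connections to any host not explicitly permitted by the
// supervising (controlling) agent. A complete implementation would also
// cover incoming connections (checkAccept/checkListen) and file access.
public class PlatformCommunicationController extends SecurityManager {
    private final Set<String> allowedHosts = ConcurrentHashMap.newKeySet();

    // Called by the supervisor agent to add or revoke a communication rule.
    public void allow(String host)  { allowedHosts.add(host); }
    public void revoke(String host) { allowedHosts.remove(host); }

    @Override
    public void checkConnect(String host, int port) {
        if (!allowedHosts.contains(host)) {
            throw new SecurityException("blocked: " + host + ":" + port);
        }
    }

    @Override
    public void checkConnect(String host, int port, Object context) {
        checkConnect(host, port);
    }

    public static void main(String[] args) {
        PlatformCommunicationController controller = new PlatformCommunicationController();
        controller.allow("platform-a.example.org");
        System.setSecurityManager(controller); // enforced for the entire platform (JVM)
        // From here on, sockets to any host other than platform-a.example.org fail
        // with a SecurityException, regardless of which agent opens them.
    }
}
```

In the actual system, such rules are installed only with the consent of the controlled agents, as described above.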
4.2 Recommender Module

The Recommender Module is mainly responsible for carrying out information filtering processes, according to the protocol described in Table 1. The participating entities are realized as agents, and the interactions as agent services. We assume that mechanisms for secure agent communication are available within the respective MAS architecture.

Table 1: The basic information filtering protocol with participants U = user agent, P = provider agent, F = TFE agent, R = relay agent, based on the abstract protocol shown in Figure 1. UP denotes the user profile with UP = {up1, ..., upn}, PP denotes the provider profile, and REC denotes the set of recommendations with REC = {rec1, ..., recm}.

  Phase/Step   Sender → Receiver   Message or Action
  1.1          R → F               establish control
  1.2          U → R               UP
  1.3          R → F               UP
  2.1          P → R, F            establish control
  2.2          P → R               PP
  2.3          R → F               PP
  3.1          F → R               REC
  3.2          R → P               REC
  3.3          P → U               REC
  3.4          R → F               terminate F
  3.5          P → R               terminate R

Two issues have to be addressed in this module: The relevant parts of the provider profile have to be retrieved without compromising the user's privacy, and the recommendations have to be propagated in a privacy-preserving way. Our solution is based on a threat model in which no main abstract entity may safely assume any other abstract entity to act in an honest manner: Each entity has to assume that other entities may attempt to obtain private information, either while following the specified protocol or even by deviating from the protocol. Following [15], we classify the former case as honest-but-curious behavior (as an example, the TFE may propagate recommendations as specified, but may additionally attempt to propagate private information), and the latter case as malicious behavior (as an example, the filter may attempt to propagate private information instead of the recommendations).

4.2.1 Retrieving the Provider Profile

As outlined above, the relay agent relays data between the TFE agent and the provider agent. These agents are not allowed to communicate directly, because the TFE agent cannot be assumed to act in an honest manner. Unlike the user profile, which is usually rather small, the provider profile is often too voluminous to be propagated as a whole efficiently. A typical example is a user profile containing ratings of about 100 movies, while the provider profile contains some 10,000 movies. Retrieving only the relevant part of the provider profile, however, is problematic because it has to be done without leaking sensitive information about the user profile. Therefore, the relay agent has to analyze all queries on the provider profile, and reject potentially critical queries, such as queries containing a set of user profile items.

Because the propagation of single unlinkable user profile items is assumed to be uncritical, we extend the information filtering protocol as follows: The relevant parts of the provider profile are retrieved based on single anonymous interactions between the relay and the provider. If the MAS architecture used for the implementation does not provide an infrastructure for anonymous agent communication, this feature has to be provided explicitly: The most straightforward way is to use additional relay agents deployed via the main relay agent and used once for a single anonymous interaction. Obviously, unlinkability is only achieved if multiple instances of the protocol are executed simultaneously between the provider and different users.
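The relay-side query analysis admits a compact sketch, under the simplifying assumption that a query can be inspected as a list of terms; the class and method names are ours:

```java
import java.util.List;
import java.util.Set;

// Sketch of the relay-side query check: queries that reference more than one
// user profile item are rejected, since only single, unlinkable items may
// reach the provider. The query representation is illustrative.
public class QueryGuard {
    private final Set<String> userProfileItems;

    public QueryGuard(Set<String> userProfileItems) {
        this.userProfileItems = userProfileItems;
    }

    /** Accept a query only if it contains at most one user profile item. */
    public boolean isAcceptable(List<String> queryTerms) {
        long hits = queryTerms.stream().filter(userProfileItems::contains).count();
        return hits <= 1;
    }

    public static void main(String[] args) {
        QueryGuard guard = new QueryGuard(Set.of("Movie A", "Movie B", "Movie C"));
        System.out.println(guard.isAcceptable(List.of("Movie A")));            // true
        System.out.println(guard.isAcceptable(List.of("Movie A", "Movie B"))); // false: linkable
    }
}
```

Any query that would let the provider correlate two or more profile items of the same user is rejected, since exactly such combinations make items linkable.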
Because agents on controlled platforms are unable to communicate anonymously with the respective controlling agent, control has to be established after the anonymous interactions have been completed. To prevent the uncontrolled relay agents from propagating provider profile data, the respective data is encrypted and the key is provided only after control has been established. Therefore, the second phase of the protocol described in Table 1 is replaced as described in Table 2.

Table 2: The updated second stage of the information filtering protocol with definitions as above. PPq is the part of the provider profile PP returned as the result of the query q.

  Phase/Step   Sender → Receiver   Message or Action
  repeat 2.1 to 2.3 ∀ up ∈ UP:
  2.1          F → R               q(up) (a query based on up)
  2.2          R(anon) → P         q(up) (R remains anonymous)
  2.3          P → R(anon)         {PPq(up)}KP
  2.4          P → R, F            establish control
  2.5          P → R               KP
  2.6          R → F               PPq(UP)

Additionally, the relay agent may allow other interactions as long as no user profile items are used within the queries. In this case, the relay agent has to ensure that the provider does not obtain any information exceeding the information deducible via the recommendations themselves. The cluster-based filtering technique described in Section 4.3 is an example of a filtering technique operating in this manner.

4.2.2 Recommendation Propagation

The propagation of the recommendations is even more problematic, mainly because more participants are involved: Recommendations have to be propagated from the TFE agent via the relay and provider agent to the user agent. No participant should be able to alter the recommendations or use them for the propagation of private information. Therefore, every participant in this chain has to obtain and verify the recommendations in unencrypted form prior to the next agent in the chain, i.e. the relay agent has to verify the recommendations before the provider obtains them, and so on. Therefore, the final phase of the protocol described in Table 1 is replaced as described in Table 3.

Table 3: The updated final stage of the information filtering protocol with definitions as above.

  Phase/Step   Sender → Receiver   Message or Action
  3.1          F → R               REC, {H(REC)}KPF
  3.2          R → P               h(KR), {{H(REC)}KPF}KR
  3.3          P → R               KPF
  3.4          R → P               KR
  repeat 3.5 ∀ rec ∈ REC:
  3.5          R → P               {rec}KURrec
  repeat 3.6 ∀ rec ∈ REC:
  3.6          P → U               h(KPrec), {{rec}KURrec}KPrec
  repeat 3.7 to 3.8 ∀ rec ∈ REC:
  3.7          U → P               KURrec
  3.8          P → U               KPrec
  3.9          U → F               terminate F
  3.10         P → U               terminate U

The updated final phase basically consists of two parts (Steps 3.1 to 3.4, and Steps 3.5 to 3.8), each of which provides a solution for a problem related to the prisoners' problem [22], in which two participants (the prisoners) intend to exchange a message via a third, untrusted participant (the warden) who may read the message but must not be able to alter it in an undetectable manner. There are various solutions for protocols addressing the prisoners' problem. The more obvious of these, however, such as protocols based on the use of digital signatures, introduce additional threats, e.g. via the possibility of additional subliminal channels [22]. In order to minimize the risk of possible threats, we have decided to use a protocol that only requires a symmetric encryption scheme.

The first part of the final phase is carried out as follows: In order to prevent the relay from altering the recommendations, they are propagated by the filter together with an encrypted hash in Step 3.1.
Thus, the relay is able to verify the recommendations before they are propagated further. The relay, however, may suspect the data propagated as the encrypted hash to contain private information instead of the actual hash value. Therefore, the encrypted hash is encrypted again and propagated together with a hash of the respective key in Step 3.2. In Step 3.3, the key KPF is revealed to the relay, allowing the relay to validate the encrypted hash. In Step 3.4, the key KR is revealed to the provider, allowing the provider to decrypt the data received in Step 3.2 and thus to obtain H(REC). Propagating the hash of the key KR prevents the relay from altering the recommendations to some REC′ after Step 3.3, which would otherwise be undetectable because the relay could choose a key KR′ such that {{H(REC)}KPF}KR = {{H(REC′)}KPF}KR′. The encryption scheme used for encrypting the hash has to be secure against known-plaintext attacks, because otherwise the relay may be able to obtain KPF after Step 3.1 and subsequently alter the recommendations in an undetectable way. Additionally, the encryption scheme must not be commutative, for similar reasons.

The remaining protocol steps are interactions between relay, provider and user agent. The interactions of Steps 3.5 to 3.8 ensure, via the same mechanisms used in Steps 3.1 to 3.4, that the provider is able to analyze the recommendations before the user obtains them, but at the same time prevent the provider from altering the recommendations. Additionally, the recommendations are not processed at once, but rather one at a time, to prevent the provider from withholding all recommendations. Upon completion of the protocol, both user and provider have obtained a set of recommendations. If the user wants these recommendations to be unlinkable to himself, the user agent has to carry out the entire protocol anonymously. Again, the most straightforward way to achieve this is to use additional relay agents deployed via the user agent which are used once for a single information filtering process.
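The commit-reveal part (Steps 3.1 to 3.4) can be simulated in a single process with the schemes named in Section 4 (AES, SHA-1); the helper methods and the flattening of REC into a single byte array are our simplifications, not the agent-level implementation:

```java
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Single-process simulation of Steps 3.1-3.4 of Table 3. F commits to H(REC)
// under KPF; R re-encrypts the commitment under KR and binds itself to KR via
// h(KR); the keys are then revealed in order, letting first R and then P
// validate the hash of the recommendations.
public class CommitRevealSketch {
    static SecretKey freshKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }
    static byte[] enc(byte[] m, SecretKey k) throws Exception {
        Cipher c = Cipher.getInstance("AES"); // default mode/padding, chosen only for brevity
        c.init(Cipher.ENCRYPT_MODE, k);
        return c.doFinal(m);
    }
    static byte[] dec(byte[] ct, SecretKey k) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.DECRYPT_MODE, k);
        return c.doFinal(ct);
    }
    static byte[] h(byte[] x) throws Exception {
        return MessageDigest.getInstance("SHA-1").digest(x);
    }

    public static void main(String[] args) throws Exception {
        byte[] rec = "rec1,rec2,rec3".getBytes();   // REC, flattened for the sketch
        SecretKey kPF = freshKey();                 // known to F, revealed in Step 3.3
        SecretKey kR  = freshKey();                 // known to R, revealed in Step 3.4

        byte[] commitment = enc(h(rec), kPF);       // Step 3.1: F -> R: REC, {H(REC)}KPF
        byte[] keyBinding = h(kR.getEncoded());     // Step 3.2: R -> P: h(KR), ...
        byte[] doubleEnc  = enc(commitment, kR);    //           ... {{H(REC)}KPF}KR

        // Step 3.3: P -> R: KPF. R validates the commitment against the REC it holds.
        boolean relayAccepts = Arrays.equals(dec(commitment, kPF), h(rec));

        // Step 3.4: R -> P: KR. P checks KR against h(KR), unwraps both layers,
        // and obtains H(REC) to verify the recommendations it is handed.
        boolean keyMatches = Arrays.equals(h(kR.getEncoded()), keyBinding);
        boolean providerAccepts = Arrays.equals(dec(dec(doubleEnc, kR), kPF), h(rec));

        System.out.println(relayAccepts && keyMatches && providerAccepts); // true
    }
}
```

Note that, as stated above, the scheme actually used must resist known-plaintext attacks and must not be commutative; the default cipher mode in the sketch is chosen for brevity, not as a recommendation.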
4.3 Exemplary Filtering Techniques

The filtering technique applied by the TFE agent cannot be chosen freely: All collaboration-based approaches, such as collaborative filtering techniques based on the profiles of a set of users, are not applicable because the provider profile does not contain user profile data (unless this data has been collected externally). Instead, these approaches are realized via the Matchmaker Module, which is outside the scope of this paper. Learning-based approaches are not applicable because the TFE agent cannot propagate any acquired data to the filter, which effectively means that the filter is incapable of learning. Filtering techniques that are actually applicable are feature-based approaches, such as content-based filtering (in which profile items are compared via their attributes) and knowledge-based filtering (in which domain-specific knowledge is applied in order to match user and provider profile items). An overview of different classes and hybrid combinations of filtering techniques is given in [5]. We have implemented two generic content-based filtering approaches that are applicable within our approach.

A direct content-based filtering technique based on the class of item-based top-N recommendation algorithms [9] is used in cases where the user profile contains items that are also contained in the provider profile. In a preprocessing stage, i.e. prior to the actual information filtering processes, a model is generated containing the k most similar items for each provider profile item. While computationally rather complex, this preprocessing is feasible because it has to be done only once, and it is carried out in a privacy-preserving way via interactions between the provider agent and a TFE agent. The resulting model is stored by the provider agent and can be seen as an additional part of the provider profile. In the actual information filtering process, the k most similar items are retrieved for each single user profile item via queries on the model (as described in Section 4.2.1, this is possible in a privacy-preserving way via anonymous communication). Recommendations are generated by selecting the n most frequent items from the result sets that are not already contained within the user profile; a sketch of this selection step is given at the end of this section.

As an alternative approach, applicable when the user profile contains information in addition to provider profile items, we provide a cluster-based approach in which provider profile items are clustered in a preprocessing stage via an agglomerative hierarchical clustering approach. Each cluster is represented by a centroid item, and the cluster elements are either sub-clusters or, on the lowest level, the items themselves. In the information filtering stage, the relevant items are retrieved by descending through the cluster hierarchy in the following manner: The cluster items of the highest level are retrieved independent of the user profile. By comparing these items with the user profile data, the most relevant sub-clusters are determined and retrieved in a subsequent iteration. This process is repeated until the lowest level is reached, which contains the items themselves as recommendations. Throughout the process, user profile items are never propagated to the provider as such. The information deducible about the user profile does not exceed the information deducible via the recommendations themselves (because essentially only a chain of cluster centroids leading to the recommendations is retrieved), and therefore it is not regarded as privacy-critical.
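The selection step of the direct content-based technique can be sketched as follows, with an in-memory map standing in for the provider-side similarity model (in the real system, the lookups happen via the anonymous queries of Section 4.2.1); all names are illustrative:

```java
import java.util.*;
import java.util.stream.Collectors;

// Sketch of the recommendation step of the direct content-based technique:
// given the k most similar items per user profile item, select the n most
// frequent candidates that the user profile does not already contain.
public class TopNRecommender {
    public static List<String> recommend(Map<String, List<String>> similarItems,
                                         Set<String> userProfile, int n) {
        Map<String, Long> frequency = userProfile.stream()
                .flatMap(item -> similarItems.getOrDefault(item, List.<String>of()).stream())
                .filter(candidate -> !userProfile.contains(candidate))
                .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
        return frequency.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Toy similarity model (k = 2 most similar items per provider item).
        Map<String, List<String>> model = Map.of(
                "Alien", List.of("Aliens", "Blade Runner"),
                "Heat",  List.of("Ronin", "Blade Runner"));
        System.out.println(recommend(model, Set.of("Alien", "Heat"), 2));
        // "Blade Runner" ranks first (it appears in both result sets),
        // followed by one of the singleton candidates.
    }
}
```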
4.4 Implementation

We have implemented the approach for privacy-preserving IF based on JIAC IV [12], a FIPA-compliant MAS architecture. JIAC IV integrates fundamental aspects of autonomous agents regarding pro-activeness, intelligence, communication capabilities and mobility by providing a scalable component-based architecture. Additionally, JIAC IV offers components realizing management and security functionality, and provides a methodology for Agent-Oriented Software Engineering. JIAC IV stands out among MAS architectures as the only security-certified architecture, since it has been certified by the German Federal Office for Information Security according to the EAL3 of the Common Criteria for Information Technology Security standard [14]. JIAC IV offers several security features in the areas of access control for agent services, secure communication between agents, and low-level security based on Java security policies [21], and thus provides all security-related functionality required for our approach. We have extended the JIAC IV architecture by adding the mechanisms for communication control described in Section 4.1. Regarding the issue of malicious hosts, we currently assume all providers of agent platforms to be trusted. We are additionally developing a solution that is actually based on a trusted computing infrastructure.

5. EVALUATION

For the evaluation of our approach, we have examined whether and to what extent the requirements (mainly regarding privacy, performance, and quality) are actually met. Privacy aspects are directly addressed by the modules and protocols described above and are therefore not evaluated further here. Performance is a critical issue, mainly because of the overhead caused by creating additional agents and agent platforms for controlling communication, and by the additional interactions within the Recommender Module. Overall, a single information filtering process takes about ten times longer than a non-privacy-preserving information filtering process leading to the same results, which is a considerable overhead but still acceptable under certain conditions, as described in the following section.

5.1 The Smart Event Assistant

As a proof of concept, and in order to evaluate performance and quality under real-life conditions, we have applied our approach within the Smart Event Assistant, a MAS-based Recommender System which integrates various personalized services for entertainment planning in different German cities, such as a restaurant finder and a movie finder [25]. Additional services, such as a calendar, a routing service and news services, complement the information services. An intelligent day planner integrates all functionality by providing personalized recommendations for the various information services, based on the user's preferences and taking into account the location of the user as well as the potential venues. All services are accessible via mobile devices as well. (The Smart Event Assistant may be accessed online via http://www.smarteventassistant.de.)

Figure 2: The Smart Event Assistant, a privacy-preserving Recommender System supporting users in planning entertainment-related activities.

Figure 2 shows a screenshot of the intelligent day planner's result dialog. The Smart Event Assistant is entirely realized as a MAS providing, among other functionality, various filter agents and different service provider agents, which together with the user agents utilize the functionality provided by our approach.

Recommendations are generated in two ways: A push service delivers new recommendations to the user in regular intervals (e.g. once per day) via email or SMS. Because the user is not online during these interactions, they are less critical with regard to performance, and the protracted duration of the information filtering process is acceptable in this case. Recommendations generated for the intelligent day planner, however, have to be delivered with very little latency, because the process is triggered by the user, who expects to receive results promptly. In this scenario, the overall performance is substantially improved by setting up the relay agent and the TFE agent offline, i.e. prior to the user's request, and by an efficient retrieval of the relevant part of the provider profile: Because the user is only interested in items, such as movies, available within a certain time period and related to specific locations, such as screenings at cinemas in a specific city, the relevant part of the provider profile is usually small enough to be propagated entirely. Because these additional parameters are not seen as privacy-critical (as they are not based on the user profile, but rather constitute a short-term information need), the relevant part of the provider profile may be propagated as a whole, with no need for complex interactions. Taken together, these improvements result in a filtering process that takes about three times as long as the respective non-privacy-preserving filtering process, which we regard as an acceptable trade-off for the increased level of privacy. Table 4 shows the results of the performance evaluation in more detail.

Table 4: Complexity of typical privacy-preserving (PP) vs. non-privacy-preserving (NPP) filtering processes in the realized application. In the non-privacy-preserving version, an agent retrieves the profiles directly and propagates the result to a provider agent.

  scenario                                  push                  day planning
  version                                   NPP        PP         NPP        PP
  profile size (retrieved/total items)
    user                                    25/25                 25/25
    provider                                125/10,000            500/10,000
  elapsed time in filtering process (s)
    setup                                   n/a        2.2        n/a        offline
    database access                         0.2        0.5        0.4        0.4
    profile propagation                     n/a        0.8        n/a        0.3
    filtering algorithm                     0.2        0.2        0.2        0.2
    result propagation                      0.1        1.1        0.1        1.1
    complete time                           0.5        4.8        0.7        2.0

In these scenarios, a direct content-based filtering technique similar to the one described in Section 4.3 is applied. Because equivalent filtering techniques have been applied successfully in regular Recommender Systems [9], there are no negative consequences with regard to the quality of the recommendations.

5.2 Alternative Approaches

As described in Section 3.2, our solution is based on trusted computing. There are more straightforward ways to realize privacy-preserving IF, e.g. by utilizing a centralized architecture in which the privacy-preserving provider-side functionality is realized as trusted software based on trusted computing. However, we consider these approaches to be unsuitable because they are far less generic: Whenever some part of the respective software is patched, upgraded or replaced, the entire system has to be analyzed again in order to determine its trustworthiness, a process that is problematic in itself due to its complexity. In our solution, only a comparatively small part of the overall system is based on trusted computing. Because agent platforms can be utilized for a large variety of tasks, and because we see trusted computing as the most promising approach to realize secure and trusted agent environments, it seems reasonable to assume that the respective mechanisms will be generally available in the future, independent of specific solutions such as the one described here.

6. CONCLUSION & FURTHER WORK

We have developed an agent-based approach for privacy-preserving Recommender Systems. By utilizing fundamental features of agents such as autonomy, adaptability and the ability to communicate, by extending the capabilities of agent platform managers regarding control of agent communication, by providing a privacy-preserving protocol for information filtering processes, and by utilizing suitable filtering techniques, we have been able to realize an approach which actually preserves privacy in Information Filtering architectures in a multilateral way. As a proof of concept, we have used the approach within an application supporting users in planning entertainment-related activities.
We envision various areas of future work: To achieve complete user privacy, the protocol should be extended in order to keep the recommendations themselves private as well. Generally, the feedback we have obtained from users of the Smart Event Assistant indicates that most users are indifferent to privacy in the context of entertainment-related personal information. Therefore, we intend to utilize the approach to realize a Recommender System in a more privacy-sensitive domain, such as health or finance, which would enable us to better evaluate user acceptance.

7. ACKNOWLEDGMENTS

We would like to thank our colleagues Andreas Rieger and Nicolas Braun, who co-developed the Smart Event Assistant. The Smart Event Assistant is based on a project funded by the German Federal Ministry of Education and Research under Grant No. 01AK037, and a project funded by the German Federal Ministry of Economics and Labour under Grant No. 01MD506.

8. REFERENCES

[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu. Hippocratic databases. In 28th Int'l Conf. on Very Large Databases (VLDB), Hong Kong, 2002.
[2] R. Agrawal and R. Srikant. Privacy-preserving data mining. In Proc. of the ACM SIGMOD Conference on Management of Data, pages 439-450. ACM Press, May 2000.
[3] E. Aïmeur, G. Brassard, J. M. Fernandez, and F. S. Mani Onana. Privacy-preserving demographic filtering. In SAC '06: Proceedings of the 2006 ACM Symposium on Applied Computing, pages 872-878, New York, NY, USA, 2006. ACM Press.
[4] M. Bawa, R. Bayardo, Jr., and R. Agrawal. Privacy-preserving indexing of documents on the network. In Proc. of the 2003 VLDB, 2003.
[5] R. Burke. Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4):331-370, 2002.
[6] J. Canny. Collaborative filtering with privacy. In IEEE Symposium on Security and Privacy, pages 45-57, 2002.
[7] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan. Private information retrieval. In IEEE Symposium on Foundations of Computer Science, pages 41-50, 1995.
[8] R. Cissée. An architecture for agent-based privacy-preserving information filtering. In Proceedings of the 6th International Workshop on Trust, Privacy, Deception and Fraud in Agent Systems, 2003.
[9] M. Deshpande and G. Karypis. Item-based top-N recommendation algorithms. ACM Trans. Inf. Syst., 22(1):143-177, 2004.
[10] L. Foner. Political artifacts and personal privacy: The Yenta multi-agent distributed matchmaking system. PhD thesis, MIT, 1999.
[11] Foundation for Intelligent Physical Agents. FIPA Abstract Architecture Specification, Version L, 2002.
[12] S. Fricke, K. Bsufka, J. Keiser, T. Schmidt, R. Sesseler, and S. Albayrak. Agent-based telematic services and telecom applications. Communications of the ACM, 44(4), April 2001.
[13] T. Garfinkel, M. Rosenblum, and D. Boneh. Flexible OS support and applications for trusted computing. In Proceedings of HotOS-IX, May 2003.
[14] T. Geissler and O. Kroll-Peters. Applying security standards to multi agent systems. In AAMAS Workshop: Safety & Security in Multiagent Systems, 2004.
[15] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proc. of STOC '87, pages 218-229, New York, NY, USA, 1987. ACM Press.
[16] S. Jha, L. Kruger, and P. McDaniel. Privacy preserving clustering. In ESORICS 2005, volume 3679 of LNCS. Springer, 2005.
[17] G. Karjoth, M. Schunter, and M. Waidner. The platform for enterprise privacy practices: Privacy-enabled management of customer data. In PET 2002, volume 2482 of LNCS. Springer, 2003.
[18] H. Link, J. Saia, T. Lane, and R. A. LaViolette. The impact of social networks on multi-agent recommender systems. In Proc. of the Workshop on Cooperative Multi-Agent Learning (ECML/PKDD '05), 2005.
[19] B. N. Miller, J. A. Konstan, and J. Riedl. PocketLens: Toward a personal recommender system. ACM Trans. Inf. Syst., 22(3):437-476, 2004.
[20] H. Polat and W. Du. SVD-based collaborative filtering with privacy. In Proc. of SAC '05, pages 791-795, New York, NY, USA, 2005. ACM Press.
[21] T. Schmidt. Advanced Security Infrastructure for Multi-Agent-Applications in the Telematic Area. PhD thesis, Technische Universität Berlin, 2002.
[22] G. J. Simmons. The prisoners' problem and the subliminal channel. In D. Chaum, editor, Proc. of Crypto '83, pages 51-67. Plenum Press, 1984.
[23] M. Teltzrow and A. Kobsa. Impacts of user privacy preferences on personalized systems: a comparative study. In Designing Personalized User Experiences in eCommerce, pages 315-332. 2004.
[24] D. Weyns, H. Parunak, F. Michel, T. Holvoet, and J. Ferber. Environments for multiagent systems: State-of-the-art and research challenges. In Environments for Multiagent Systems, volume 3477 of LNCS. Springer, 2005.
[25] J. Wohltorf, R. Cissée, and A. Rieger. Berlintainment: An agent-based context-aware entertainment planning system. IEEE Communications Magazine, 43(6), 2005.
[26] M. Wooldridge and N. R. Jennings. Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2):115-152, 1995.
[27] A. Yao. Protocols for secure computation. In Proc. of IEEE FOCS '82, pages 160-164, 1982.
An Agent-Based Approach for Privacy-Preserving Recommender Systems ABSTRACT Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multi-Agent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach. 1. INTRODUCTION Information Filtering (IF) systems aim at countering information overload by extracting information that is relevant for a given user out of a large body of information available via an information provider. In contrast to Information Retrieval (IR) systems, where relevant information 10587 Berlin sahin.albayrak@dai-labor.de is extracted based on search queries, IF architectures generate personalized information based on user profiles containing, for each given user, personal data, preferences, and rated items. The provided body of information is usually structured and collected in provider profiles. Filtering techniques operate on these profiles in order to generate recommendations of items that are probably relevant for a given user, or in order to determine users with similar interests, or both. Depending on the respective goal, the resulting systems constitute Recommender Systems [5], Matchmaker Systems [10], or a combination thereof. The aspect of privacy is an essential issue in all IF systems: Generating personalized information obviously requires the use of personal data. According to surveys indicating major privacy concerns of users in the context of Recommender Systems and e-commerce in general [23], users can be expected to be less reluctant to provide personal information if they trust the system to be privacy-preserving with regard to personal data. Similar considerations also apply to the information provider, who may want to control the dissemination of the provided information, and to the provider of the filtering techniques, who may not want the details of the utilized filtering algorithms to become common knowledge. A privacy-preserving IF system should therefore balance these requirements and protect the privacy of all parties involved in a multilateral way, while addressing general requirements regarding performance, security and quality of the recommendations as well. As described in the following section, there are several approaches with similar goals, but none of these provide a generic approach in which the privacy of all parties is preserved. We have developed an agent-based approach for privacypreserving IF which has been utilized for realizing a combined Recommender/Matchmaker System as part of an application supporting users in planning entertainment-related activities. In this paper, we focus on the Recommender System functionality. Our approach is based on Multi-Agent System (MAS) technology because fundamental features of agents such as autonomy, adaptability and the ability to communicate are essential requirements of our approach. 
In other words, the realized approach does not merely constitute a solution for privacy-preserving IF within a MAS context, but rather utilizes a MAS architecture in order to realize a solution for privacy-preserving IF, which could not be realized easily otherwise. The paper is structured as follows: Section 2 describes related work. Section 3 describes the general ideas of our approach. In Section 4, we describe essential details of the 978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS modules of our approach and their implementation. In Section 5, we evaluate the approach, mainly via the realized application. Section 6 concludes the paper with an outlook and outlines further work. 2. RELATED WORK There is a large amount of work in related areas, such as Private Information Retrieval [7], Privacy-Preserving Data Mining [2], and other privacy-preserving protocols [4, 16], most of which is based on Secure Multi-Party Computation [27]. We have ruled out Secure Multi-Party Computation approaches mainly because of their complexity, and because the algorithm that is computed securely is not considered to be private in these approaches. Various enforcement mechanisms have been suggested that are applicable in the context of privacy-preserving Information Filtering, such as enterprise privacy policies [17] or hippocratic databases [1], both of which annotate user data with additional meta-information specifying how the data is to be handled on the provider side. These approaches ultimately assume that the provider actually intends to protect the privacy of the user data, and offer support for this task, but they are not intended to prevent the provider from acting in a malicious manner. Trusted computing, as specified by the Trusted Computing Group, aims at realizing trusted systems by increasing the security of open systems to a level comparable with the level of security that is achievable in closed systems. It is based on a combination of tamper-proof hardware and various software components. Some example applications, including peer-to-peer networks, distributed firewalls, and distributed computing in general, are listed in [13]. There are some approaches for privacy-preserving Recommender Systems based on distributed collaborative filtering, in which recommendations are generated via a public model aggregating the distributed user profiles without containing explicit information about user profiles themselves. This is achieved via Secure Multi-Party Computation [6], or via random perturbation of the user data [20]. In [19], various approaches are integrated within a single architecture. In [10], an agent-based approach is described in which user agents representing similar users are discovered via a transitive traversal of user agents. Privacy is preserved through pseudonymous interaction between the agents and through adding obfuscating data to personal information. More recent related approaches are described in [18]. In [3], an agent-based architecture for privacy-preserving demographic filtering is described which may be generalized in order to support other kinds of filtering techniques. While in some aspects similar to our approach, this architecture addresses at least two aspects inadequately, namely the protection of the filter against manipulation attempts, and the prevention of collusions between the filter and the provider. 3. 
PRIVACY-PRESERVING INFORMATION FILTERING We identify three main abstract entities participating in an information filtering process within a distributed system: A user entity, a provider entity, and a filter entity. Whereas in some applications the provider and filter entities explicitly trust each other, because they are deployed by the same party, our solution is applicable more generically because it does not require any trust between the main abstract entities. In this paper, we focus on aspects related to the information filtering process itself, and omit all aspects related to information collection and processing, i.e. the stages in which profiles are generated and maintained, mainly because these stages are less critical with regard to privacy, as they involve fewer different entities. 3.1 Requirements Our solution aims at meeting the following requirements with regard to privacy: • User Privacy: No linkable information about user profiles should be acquired permanently by any other entity or external party, including other user entities. Single user profile items, however, may be acquired permanently if they are unlinkable, i.e. if they cannot be attributed to a specific user or linked to other user profile items. Temporary acquisition of private information is permitted as well. Sets of recommendations may be acquired permanently by the provider, but they should not be linkable to a specific user. These concessions simplify the resulting protocol and allow the provider to obtain recommendations and single unlinkable user profile items, and thus to determine frequently requested information and optimize the offered information accordingly. • Provider Privacy: No information about provider profiles, with the exception of the recommendations, should be acquired permanently by other entities or external parties. Again, temporary acquisition of private information is permitted. Additionally, the propagation of provider information is entirely under the control of the provider. Thus, the provider is enabled to prevent misuse such as the automatic large-scale extraction of information. • Filter Privacy: Details of the algorithms applied by the filtering techniques should not be acquired permanently by any other entity or external party. General information about the algorithm may be provided by the filter entity in order to help other entities to reach a decision on whether to apply the respective filtering technique. In addition, general requirements regarding the quality of the recommendations as well as security aspects, performance and broadness of the resulting system have to be addressed as well. While minor trade-offs may be acceptable, the resulting system should reach a level similar to regular Recommender Systems with regard to these requirements. 3.2 Outline of the Solution The basic idea for realizing a protocol fulfilling these privacy-related requirements in Recommender Systems is implied by allowing the temporary acquisition of private information (see [8] for the original approach): User and provider entity both propagate the respective profile data to the filter entity. The filter entity provides the recommendations, and subsequently deletes all private information, thus fulfilling the requirement regarding permanent acquisition of private information. 320 The Sixth Intl. . Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) The entities whose private information is propagated have to be certain that the respective information is actually acquired temporarily only. Trust in this regard may be established in two main ways: • Trusted Software: The respective entity itself is trusted to remove the respective information as specified. • Trusted Environment: The respective entity operates in an environment that is trusted to control the communication and life cycle of the entity to an extent that the removal of the respective information may be achieved regardless of the attempted actions of the entity itself. Additionally, the environment itself is trusted not to act in a malicious manner (e.g. it is trusted not to acquire and propagate the respective information itself). In both cases, trust may be established in various ways. Reputation-based mechanisms, additional trusted third parties certifying entities or environments, or trusted computing mechanisms may be used. Our approach is based on a trusted environment realized via trusted computing mechanisms, because we see this solution as the most generic and realistic approach. This decision is discussed briefly in Section 5. We are now able to specify the abstract information filtering protocol as shown in Figure 1: The filter entity deploys a Temporary Filter Entity (TFE) operating in a trusted environment. The user entity deploys an additional relay entity operating in the same environment. Through mechanisms provided by this environment, the relay entity is able to control the communication of the TFE, and the provider entity is able to control the communication of both relay entity and the TFE. Thus, it is possible to ensure that the controlled entities are only able to propagate recommendations, but no other private information. In the first stage (steps 1.1 to 1.3 of Figure 1), the relay entity establishes control of the TFE, and thus prevents it from propagating user profile information. User profile data is propagated without participation of the provider entity from the user entity to the TFE via the relay entity. In the second stage (steps 2.1 to 2.3 of Figure 1), the provider entity establishes control of both relay and TFE, and thus prevents them from propagating provider profile information. Provider profile data is propagated from the provider entity to the TFE via the relay entity. In the third stage (steps 3.1 to 3.5 of Figure 1), the TFE returns the recommendations via the relay entity, and the controlled entities are terminated. Taken together, these steps ensure that all private information is acquired temporarily only by the other main entities. The problems of determining acceptable queries on the provider profile and ensuring unlinkability of the recommendations are discussed in the following section. Our approach requires each entity in the distributed architecture to have the following five main abilities: The ability to perform certain well-defined tasks (such as carrying out a filtering process) with a high degree of autonomy, i.e. largely independent of other entities (e.g. because the respective entity is not able to communicate in an unrestricted manner), the ability to be deployable dynamically in a well-defined environment, the ability to communicate with other entities, the ability to achieve protection against external manipulation attempts, and the ability to control and restrict the communication of other entities. 
Figure 1: The abstract privacy-preserving information filtering protocol. All communication across the environments indicated by dashed lines is prevented with the exception of communication with the controlling entity. MAS architectures are an ideal solution for realizing a distributed system characterized by these features, because they provide agents constituting entities that are actually characterized by autonomy, mobility and the ability to communicate [26], as well as agent platforms as environments providing means to realize the security of agents. In this context, the issue of malicious hosts, i.e. hosts attacking agents, has to be addressed explicitly. Furthermore, existing MAS architectures generally do not allow agents to control the communication of other agents. It is possible, however, to expand a MAS architecture and to provide designated agents with this ability. For these reasons, our solution is based on a FIPA [11] - compliant MAS architecture. The entities introduced above are mapped directly to agents, and the trusted environment in which they exist is realized in the form of agent platforms. In addition to the MAS architecture itself, which is assumed as given, our solution consists of the following five main modules: • The Controller Module described in Section 4.1 provides functionality for controlling the communication capabilities of agents. • The Transparent Persistence Module facilitates the use of different data storage mechanisms, and provides a uniform interface for accessing persistent information, which may be utilized for monitoring critical interactions involving potentially private information e.g. as part of queries. Its description is outside the scope of this paper. • The Recommender Module, details of which are described in Section 4.2, provides Recommender System functionality. • The Matchmaker Module provides Matchmaker System functionality. It additionally utilizes social aspects of MAS technology. Its description is outside the scope of this paper. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 321 • Finally, a separate module described in Section 4.4 provides Exemplary Filtering Techniques in order to show that various restrictions imposed on filtering techniques by our approach may actually be fulfilled. The trusted environment introduced above encompasses the MAS architecture itself and the Controller Module, which have to be trusted to act in a non-malicious manner in order to rule out the possibility of malicious hosts. 4. MAIN MODULES AND IMPLEMENTATION In this section, we describe the main modules of our approach, and outline the implementation. While we have chosen a specific architecture for the implementation, the specification of the module is applicable to any FIPA-compliant MAS architecture. A module basically encompasses ontologies, functionality provided by agents via agent services, and internal functionality. Throughout this paper, {m} KX denotes a message m encrypted via a non-specified symmetric encryption scheme with a secret key KX used for encryption and decryption which is initially known only to participant X. A key KXY is a key shared by participants X and Y. A cryptographic hash function is used at various points of the protocol, i.e. a function returning a hash value h (x) for given data x that is both preimage-resistant and collision-resistant1. We denote a set of hash values for a data set X = {x1,. . , xn} as H (X) = {h (x1),. . 
, h (xn)}, whereas h (X) denotes a single hash value of a data set. 4.1 Controller Module As noted above, the ability to control the communication of agents is generally not a feature of existing MAS architectures2 but at the same time a central requirement of our approach for privacy-preserving Information Filtering. The required functionality cannot be realized based on regular agent services or components, because an agent on a platform is usually not allowed to interfere with the actions of other agents in any way. Therefore, we add additional infrastructure providing the required functionality to the MAS architecture itself, resulting in an agent environment with extended functionality and responsibilities. Controlling the communication capabilities of an agent is realized by restricting via rules, in a manner similar to a firewall, but with the consent of the respective agent, its incoming and outgoing communication to specific platforms or agents on external platforms as well as other possible communication channels, such as the file system. Consent is required because otherwise the overall security would be compromised, as attackers could arbitrarily block various communication channels. Our approach does not require controlling the communication between agents on the same platform, and therefore this aspect is not addressed. Consequently, all rules addressing communication capabilities have to be enforced across entire platforms, because otherwise a controlled agent could just use a non-controlled agent 1In the implementation, we have used the Advanced Encryption Standard (AES) as the symmetric encryption scheme and SHA-1 as the cryptographic hash function. 2A recent survey on agent environments [24] concludes that aspects related to agent environments are often neglected, and does not indicate any existing work in this particular area. on the same platform as a relay for communicating with agents residing on external platforms. Various agent services provide functionality for adding and revoking control of platforms, including functionality required in complex scenarios where controlled agents in turn control further platforms. The implementation of the actual control mechanism depends on the actual MAS architecture. In our implementation, we have utilized methods provided via the Java Security Manager as part of the Java security model. Thus, the supervisor agent is enabled to define custom security policies, thereby granting or denying other agents access to resources required for communication with other agents as well as communication in general, such as files or sockets for TCP/IP-based communication. 4.2 Recommender Module The Recommender Module is mainly responsible for carrying out information filtering processes, according to the protocol described in Table 1. The participating entities are realized as agents, and the interactions as agent services. We assume that mechanisms for secure agent communication are available within the respective MAS architecture. Two issues have to be addressed in this module: The relevant parts of the provider profile have to be retrieved without compromising the user's privacy, and the recommendations have to be propagated in a privacy-preserving way. 
4.2 Recommender Module The Recommender Module is mainly responsible for carrying out information filtering processes, according to the protocol described in Table 1. The participating entities are realized as agents, and the interactions as agent services. We assume that mechanisms for secure agent communication are available within the respective MAS architecture. Two issues have to be addressed in this module: The relevant parts of the provider profile have to be retrieved without compromising the user's privacy, and the recommendations have to be propagated in a privacy-preserving way.

Table 1: The basic information filtering protocol with participants U = user agent, P = provider agent, F = TFE agent, R = relay agent, based on the abstract protocol shown in Figure 1. UP denotes the user profile with UP = {up1, ..., upn}, PP denotes the provider profile, and REC denotes the set of recommendations with REC = {rec1, ..., recm}.

Our solution is based on a threat model in which no main abstract entity may safely assume any other abstract entity to act in an honest manner: Each entity has to assume that other entities may attempt to obtain private information, either while following the specified protocol or even by deviating from the protocol. According to [15], we classify the former case as honest-but-curious behavior (as an example, the TFE may propagate recommendations as specified, but may additionally attempt to propagate private information), and the latter case as malicious behavior (as an example, the filter may attempt to propagate private information instead of the recommendations). 4.2.1 Retrieving the Provider Profile As outlined above, the relay agent relays data between the TFE agent and the provider agent. These agents are not allowed to communicate directly, because the TFE agent cannot be assumed to act in an honest manner. Unlike the user profile, which is usually rather small, the provider profile is often too voluminous to be propagated as a whole efficiently. A typical example is a user profile containing ratings of about 100 movies, while the provider profile contains some 10,000 movies. Retrieving only the relevant part of the provider profile, however, is problematic because it has to be done without leaking sensitive information about the user profile. Therefore, the relay agent has to analyze all queries on the provider profile, and reject potentially critical queries, such as queries containing a set of user profile items. Because the propagation of single unlinkable user profile items is assumed to be uncritical, we extend the information filtering protocol as follows: The relevant parts of the provider profile are retrieved based on single anonymous interactions between the relay and the provider. If the MAS architecture used for the implementation does not provide an infrastructure for anonymous agent communication, this feature has to be provided explicitly: The most straightforward way is to use additional relay agents that are deployed via the main relay agent and used once for a single anonymous interaction. Obviously, unlinkability is only achieved if multiple instances of the protocol are executed simultaneously between the provider and different users. Because agents on controlled platforms are unable to communicate anonymously with the respective controlling agent, control has to be established after the anonymous interactions have been completed. To prevent the uncontrolled relay agents from propagating provider profile data, the respective data is encrypted and the key is provided only after control has been established. Therefore, the second phase of the protocol described in Table 1 is replaced as described in Table 2.

Table 2: The updated second stage of the information filtering protocol with definitions as above. PPq is the part of the provider profile PP returned as the result of the query q.

Additionally, the relay agent may allow other interactions as long as no user profile items are used within the queries. In this case, the relay agent has to ensure that the provider does not obtain any information exceeding the information deducible via the recommendations themselves. The cluster-based filtering technique described in Section 4.3 is an example of a filtering technique operating in this manner.
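A minimal sketch of this encrypt-then-release-key step follows, again with Fernet standing in for the symmetric scheme; the class and method names are illustrative assumptions, not part of the protocol specification:

```python
from cryptography.fernet import Fernet

class Provider:
    def __init__(self, profile_parts):
        self.key = Fernet.generate_key()
        self.parts = profile_parts

    def answer_query(self, query):
        # anonymous interaction: return the matching part, encrypted only
        return Fernet(self.key).encrypt(self.parts[query])

    def release_key(self, relay_controlled: bool):
        # the key is handed out only once control has been established
        if not relay_controlled:
            raise PermissionError("establish control before key release")
        return self.key

provider = Provider({"item42": b"attributes of item 42"})
blob = provider.answer_query("item42")          # relay holds ciphertext only
key = provider.release_key(relay_controlled=True)
assert Fernet(key).decrypt(blob) == b"attributes of item 42"
```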
4.2.2 Recommendation Propagation The propagation of the recommendations is even more problematic, mainly because more participants are involved: Recommendations have to be propagated from the TFE agent via the relay and provider agent to the user agent. No participant should be able to alter the recommendations or use them for the propagation of private information. Therefore, every participant in this chain has to obtain and verify the recommendations in unencrypted form before the next agent in the chain does, i.e. the relay agent has to verify the recommendations before the provider obtains them, and so on. Therefore, the final phase of the protocol described in Table 1 is replaced as described in Table 3.

Table 3: The updated final stage of the information filtering protocol with definitions as above.

It basically consists of two parts (Step 3.1 to 3.4, and Step 3.5 to 3.8), each of which provides a solution for a problem related to the prisoners' problem [22], in which two participants (the prisoners) intend to exchange a message via a third, untrusted participant (the warden) who may read the message but must not be able to alter it in an undetectable manner. There are various solutions for protocols addressing the prisoners' problem. The more obvious of these, however, such as protocols based on the use of digital signatures, introduce additional threats, e.g. via the possibility of additional subliminal channels [22]. In order to minimize the risk of possible threats, we have decided to use a protocol that only requires a symmetric encryption scheme. The first part of the final phase is carried out as follows: In order to prevent the relay from altering recommendations, they are propagated by the filter together with an encrypted hash in Step 3.1. Thus, the relay is able to verify the recommendations before they are propagated further. The relay, however, may suspect the data propagated as the encrypted hash to contain private information instead of the actual hash value. Therefore, the encrypted hash is encrypted again and propagated together with a hash on the respective key in Step 3.2. In Step 3.3, the key KPF is revealed to the relay, allowing the relay to validate the encrypted hash. In Step 3.4, the key KR is revealed to the provider, allowing the provider to decrypt the data received in Step 3.2 and thus to obtain H(REC). Propagating the hash of the key KR prevents the relay from altering the recommendations to REC' after Step 3.3, which would otherwise be undetectable because the relay could choose a key KR' such that {{H(REC)}KPF}KR = {{H(REC')}KPF}KR'. The encryption scheme used for encrypting the hash has to be secure against known-plaintext attacks, because otherwise the relay may be able to obtain KPF after Step 3.1 and subsequently alter the recommendations in an undetectable way. Additionally, the encryption scheme must not be commutative, for similar reasons. The remaining protocol steps are interactions between relay, provider and user agent.
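The following sketch walks through Steps 3.1 to 3.4 with all keys and messages held in one process, purely to make the data flow explicit; Fernet again stands in for the symmetric scheme (its authenticated encryption is, if anything, stronger than the known-plaintext resistance and non-commutativity required above):

```python
import hashlib
from cryptography.fernet import Fernet

def h(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

def H(rec):
    """Hash set of the recommendations, serialized for encryption."""
    return b"|".join(sorted(h(r) for r in rec))

REC  = [b"movie-17", b"movie-42"]
K_PF = Fernet.generate_key()            # shared by provider and filter
K_R  = Fernet.generate_key()            # chosen by the relay

# Step 3.1  F -> R: recommendations plus the encrypted hash {H(REC)}KPF
c1 = Fernet(K_PF).encrypt(H(REC))
# Step 3.2  R -> P: the hash re-encrypted, {{H(REC)}KPF}KR, plus h(KR)
c2, key_commitment = Fernet(K_R).encrypt(c1), h(K_R)
# Step 3.3  KPF is revealed to the relay, which validates the encrypted hash
assert Fernet(K_PF).decrypt(c1) == H(REC)
# Step 3.4  KR is revealed to the provider, which checks the commitment and
# recovers H(REC); h(KR) pins KR down, so the relay cannot swap REC
assert h(K_R) == key_commitment
assert Fernet(K_PF).decrypt(Fernet(K_R).decrypt(c2)) == H(REC)
```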
Steps 3.5 to 3.8 ensure, via the same mechanisms used in Steps 3.1 to 3.4, that the provider is able to analyze the recommendations before the user obtains them, but at the same time prevent the provider from altering the recommendations. Additionally, the recommendations are not processed all at once, but rather one at a time, to prevent the provider from withholding all recommendations. Upon completion of the protocol, both user and provider have obtained a set of recommendations. If the user wants these recommendations to be unlinkable to himself, the user agent has to carry out the entire protocol anonymously. Again, the most straightforward way to achieve this is to use additional relay agents that are deployed via the user agent and used once for a single information filtering process. 4.3 Exemplary Filtering Techniques The filtering technique applied by the TFE agent cannot be chosen freely: All collaboration-based approaches, such as collaborative filtering techniques based on the profiles of a set of users, are not applicable because the provider profile does not contain user profile data (unless this data has been collected externally). Instead, these approaches are realized via the Matchmaker Module, which is outside the scope of this paper. Learning-based approaches are not applicable because the TFE agent cannot propagate any acquired data to the filter, which effectively means that the filter is incapable of learning. Filtering techniques that are actually applicable are feature-based approaches, such as content-based filtering (in which profile items are compared via their attributes) and knowledge-based filtering (in which domain-specific knowledge is applied in order to match user and provider profile items). An overview of different classes and hybrid combinations of filtering techniques is given in [5]. We have implemented two generic content-based filtering techniques that are applicable within our approach: A direct content-based filtering technique based on the class of item-based top-N recommendation algorithms [9] is used in cases where the user profile contains items that are also contained in the provider profile. In a preprocessing stage, i.e. prior to the actual information filtering processes, a model is generated containing the k most similar items for each provider profile item. While computationally rather complex, this approach is feasible because it has to be done only once, and it is carried out in a privacy-preserving way via interactions between the provider agent and a TFE agent. The resulting model is stored by the provider agent and can be seen as an additional part of the provider profile. In the actual information filtering process, the k most similar items are retrieved for each single user profile item via queries on the model (as described in Section 4.2.1, this is possible in a privacy-preserving way via anonymous communication). Recommendations are generated by selecting the n most frequent items from the result sets that are not already contained within the user profile.
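A compact sketch of this item-based scheme follows, assuming cosine similarity over item attribute vectors; the similarity measure and the parameters k and n are illustrative choices, and the anonymous query machinery from Section 4.2.1 is abstracted away:

```python
import numpy as np
from collections import Counter

def build_model(item_vectors: np.ndarray, k: int) -> np.ndarray:
    """Preprocessing: for each provider item, its k most similar items
    (cosine similarity); the model is stored by the provider agent."""
    unit = item_vectors / np.linalg.norm(item_vectors, axis=1, keepdims=True)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -np.inf)         # an item is not its own neighbor
    return np.argsort(-sims, axis=1)[:, :k]

def recommend(model: np.ndarray, user_items, n: int):
    """Filtering: the n most frequent neighbors of the user's items that
    are not already in the profile (one anonymous model query per item)."""
    counts = Counter()
    for item in user_items:
        counts.update(int(j) for j in model[item])
    for item in user_items:
        counts.pop(item, None)
    return [item for item, _ in counts.most_common(n)]

vectors = np.random.rand(1000, 32)          # toy provider profile
model = build_model(vectors, k=10)
print(recommend(model, user_items=[3, 17, 256], n=5))
```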
As an alternative approach, applicable when the user profile contains information in addition to provider profile items, we provide a cluster-based approach in which provider profile items are clustered in a preprocessing stage via an agglomerative hierarchical clustering algorithm. Each cluster is represented by a centroid item, and the cluster elements are either sub-clusters or, on the lowest level, the items themselves. In the information filtering stage, the relevant items are retrieved by descending through the cluster hierarchy in the following manner: The cluster items of the highest level are retrieved independent of the user profile. By comparing these items with the user profile data, the most relevant sub-clusters are determined and retrieved in a subsequent iteration. This process is repeated until the lowest level is reached, which contains the items themselves as recommendations. Throughout the process, user profile items are never propagated to the provider as such. The information deducible about the user profile does not exceed the information deducible via the recommendations themselves (because essentially only a chain of cluster centroids leading to the recommendations is retrieved), and therefore it is not regarded as privacy-critical.
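A sketch of the descent itself, assuming a simple tree of clusters whose nodes carry centroid vectors; the dot-product scoring, the beam width, and the balanced hierarchy are illustrative assumptions:

```python
import numpy as np

class Cluster:
    def __init__(self, centroid, children=None, item=None):
        self.centroid = np.asarray(centroid)
        self.children = children or []   # sub-clusters; empty at leaf level
        self.item = item                 # concrete item on the lowest level

def descend(root, user_vector, beam=2):
    """Retrieve recommendations by repeatedly expanding the sub-clusters
    whose centroids best match the user profile; only centroids ever cross
    to the provider, never the user profile items themselves."""
    frontier = [root]
    while any(c.children for c in frontier):
        candidates = [child for c in frontier for child in c.children]
        candidates.sort(key=lambda c: -float(user_vector @ c.centroid))
        frontier = candidates[:beam]     # most relevant sub-clusters only
    return [c.item for c in frontier if c.item is not None]
```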
4.4 Implementation We have implemented the approach for privacy-preserving IF based on JIAC IV [12], a FIPA-compliant MAS architecture. JIAC IV integrates fundamental aspects of autonomous agents regarding pro-activeness, intelligence, communication capabilities and mobility by providing a scalable component-based architecture. Additionally, JIAC IV offers components realizing management and security functionality, and provides a methodology for Agent-Oriented Software Engineering. JIAC IV stands out among MAS architectures as the only security-certified architecture: it has been certified by the German Federal Office for Information Security according to EAL3 of the Common Criteria for Information Technology Security standard [14]. JIAC IV offers several security features in the areas of access control for agent services, secure communication between agents, and low-level security based on Java security policies [21], and thus provides all security-related functionality required for our approach. We have extended the JIAC IV architecture by adding the mechanisms for communication control described in Section 4.1. Regarding the issue of malicious hosts, we currently assume all providers of agent platforms to be trusted. We are additionally developing a solution that is actually based on a trusted computing infrastructure. 5. EVALUATION For the evaluation of our approach, we have examined whether and to what extent the requirements (mainly regarding privacy, performance, and quality) are actually met. Privacy aspects are directly addressed by the modules and protocols described above and are therefore not evaluated further here. Performance is a critical issue, mainly because of the overhead caused by creating additional agents and agent platforms for controlling communication, and by the additional interactions within the Recommender Module. Overall, a single information filtering process takes about ten times longer than a non-privacy-preserving information filtering process leading to the same results, which is a considerable overhead but still acceptable under certain conditions, as described in the following section. 5.1 The Smart Event Assistant As a proof of concept, and in order to evaluate performance and quality under real-life conditions, we have applied our approach within the Smart Event Assistant, a MAS-based Recommender System that integrates various personalized services for entertainment planning in different German cities, such as a restaurant finder and a movie finder [25].

Figure 2: The Smart Event Assistant, a privacy-preserving Recommender System supporting users in planning entertainment-related activities.

Additional services, such as a calendar, a routing service and news services, complement the information services. An intelligent day planner integrates all functionality by providing personalized recommendations for the various information services, based on the user's preferences and taking into account the location of the user as well as the potential venues. All services are accessible via mobile devices as well. Figure 2 shows a screenshot of the intelligent day planner's result dialog. The Smart Event Assistant is entirely realized as a MAS providing, among other functionality, various filter agents and different service provider agents, which together with the user agents utilize the functionality provided by our approach. Recommendations are generated in two ways: A push service delivers new recommendations to the user in regular intervals (e.g. once per day) via email or SMS. Because the user is not online during these interactions, they are less critical with regard to performance, and the protracted duration of the information filtering process is acceptable in this case. Recommendations generated for the intelligent day planner, however, have to be delivered with very little latency because the process is triggered by the user, who expects to receive results promptly. In this scenario, the overall performance is substantially improved by setting up the relay agent and the TFE agent offline, i.e. prior to the user's request, and by an efficient retrieval of the relevant part of the provider profile: Because the user is only interested in items, such as movies, available within a certain time period and related to specific locations, such as screenings at cinemas in a specific city, the relevant part of the provider profile is usually small enough to be propagated entirely. Because these additional parameters are not seen as privacy-critical (as they are not based on the user profile, but rather constitute a short-term information need), the relevant part of the provider profile may be propagated as a whole, with no need for complex interactions. Taken together, these improvements result in a filtering process that takes about three times as long as the respective non-privacy-preserving filtering process, which we regard as an acceptable trade-off for the increased level of privacy. Table 4 shows the results of the performance evaluation in more detail. In these scenarios, a direct content-based filtering technique similar to the one described in Section 4.3 is applied. Because equivalent filtering techniques have been applied successfully in regular Recommender Systems [9], there are no negative consequences with regard to the quality of the recommendations. 5.2 Alternative Approaches As described in Section 3.2, our solution is based on trusted computing. There are more straightforward ways to realize privacy-preserving IF, e.g. by utilizing a centralized architecture in which the privacy-preserving provider-side functionality is realized as trusted software based on trusted computing.
However, we consider these approaches to be unsuitable because they are far less generic: Whenever some part of the respective software is patched, upgraded or replaced, the entire system has to be analyzed again in order to determine its trustworthiness, a process that is problematic in itself due to its complexity. In our solution, only a comparatively small part of the overall system is based on trusted computing. Because agent platforms can be utilized for a large variety of tasks, and because we see trusted computing as the most promising approach to realize secure and trusted agent environments, it seems reasonable to assume that the respective mechanisms will be generally available in the future, independent of specific solutions such as the one described here. 6. CONCLUSION & FURTHER WORK We have developed an agent-based approach for privacy-preserving Recommender Systems. By utilizing fundamental features of agents such as autonomy, adaptability and the ability to communicate, by extending the capabilities of agent platform managers regarding control of agent communication, by providing a privacy-preserving protocol for information filtering processes, and by utilizing suitable filtering techniques, we have been able to realize an approach which actually preserves privacy in Information Filtering architectures in a multilateral way. As a proof of concept, we have used the approach within an application supporting users in planning entertainment-related activities. We envision various areas of future work: To achieve complete user privacy, the protocol should be extended in order to keep the recommendations themselves private as well. Generally, the feedback we have obtained from users of the Smart Event Assistant indicates that most users are indifferent to privacy in the context of entertainment-related personal information. Therefore, we intend to utilize the approach to realize a Recommender System in a more privacy-sensitive domain, such as health or finance, which would enable us to better evaluate user acceptance.
J-34
(In)Stability Properties of Limit Order Dynamics
We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to butterfly effects -- the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of absolute traders (who determine their prices independent of the current order book state) or relative traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.
[ "modern equiti market", "bid", "absolut trader model", "rel trader model", "standard continu limit-order mechan", "electron commun network", "market microstructur", "high-frequenc microstructur signal", "rel price model", "modern execut optim", "quantit trade strategi", "penni-jump", "comput financ" ]
[ "P", "P", "P", "P", "M", "M", "M", "U", "R", "M", "U", "U", "U" ]
(In)Stability Properties of Limit Order Dynamics Eyal Even-Dar, Sham M. Kakade, Michael Kearns, Yishay Mansour ABSTRACT We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to butterfly effects - the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of absolute traders (who determine their prices independent of the current order book state) or relative traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Theory 1. INTRODUCTION In recent years there has been an explosive increase in the automation of modern equity markets. This increase has taken place both in the exchanges, which are increasingly computerized and offer sophisticated interfaces for order placement and management, and in the trading activity itself, which is ever more frequently undertaken by software. The so-called Electronic Communication Networks (or ECNs) that dominate trading in NASDAQ stocks are a common example of the automation of the exchanges. On the trading side, computer programs are now entrusted not only with the careful execution of large block trades for clients (sometimes referred to on Wall Street as program trading), but with the autonomous selection of stocks, direction (long or short) and volumes to trade for profit (commonly referred to as statistical arbitrage). The vast majority of equity trading is done via the standard limit order market mechanism. In this mechanism, continuous trading takes place via the arrival of limit orders specifying whether the party wishes to buy or sell, the volume desired, and the price offered. Arriving limit orders that are entirely or partially executable with the best offers on the other side are executed immediately, with any volume not immediately executable being placed in a queue (or book) ordered by price on the appropriate side (buy or sell). (A detailed description of the limit order mechanism is given in Section 3.) While traders have always been able to view the prices at the top of the buy and sell books (known as the bid and ask), a relatively recent development in certain exchanges is the real-time revelation of the entire order book - the complete distribution of orders, prices and volumes on both sides of the exchange. With this revelation has come the opportunity - and increasingly, the need - for modeling and exploiting limit order data and dynamics. It is fair to say that market microstructure, as this area is generally known, is a topic commanding great interest both in the real markets and in the academic finance literature. The opportunities and needs span the range from the optimized execution of large trades to the creation of stand-alone proprietary strategies that attempt to profit from high-frequency microstructure signals.
In this paper we investigate a previously unexplored but fundamental aspect of limit order microstructure: the stability properties of the dynamics. Specifically, we are interested in the following natural question: To what extent are simple models of limit order markets either susceptible or immune to butterfly effects - that is, the infliction of large changes in important activity statistics (such as the number of shares traded or the average price per share) by only minor perturbations of the order sequence? To examine this question, we consider two stylized but natural models of the limit order arrival process. In the absolute price model, buyers and sellers arrive with limit order prices that are determined independently of the current state of the market (as represented by the order books), though they may depend on all manner of exogenous information or shocks, such as time, news events, announcements from the company whose shares are being traded, private signals or state of the individual traders, etc. This process models traditional fundamentals-based trading, in which market participants each have some inherent but possibly varying valuation for the good that in turn determines their limit price. In contrast, in the relative price model, traders express their limit order prices relative to the best price offered in their respective book (buy or sell). Thus, a buyer would encode their limit order price as an offset ∆ (which may be positive, negative, or zero) from the current bid pb, which is then translated to the limit price pb + ∆. Again, in addition to now depending on the state of the order books, prices may also depend on all manner of exogenous information. The relative price model can be viewed as modeling traders who, in addition to perhaps incorporating fundamental external information on the stock, may also position their orders strategically relative to the other orders on their side of the book. A common example of such strategic behavior is known as penny-jumping on Wall Street, in which a trader who has an interest in buying shares quickly, but still at a discount to placing a market order, will deliberately position their order just above the current bid. More generally, the entire area of modern execution optimization [9, 10, 8] has come to rely heavily on the careful positioning of limit orders relative to the current order book state. Note that such positioning may depend on more complex features of the order books than just the current bid and ask, but the relative model is a natural and simplified starting point. We remark that an alternate view of the two models is that all traders behave in a relative manner, but with absolute traders able to act only on a considerably slower time scale than the faster relative traders. How do these two models differ? Clearly, given any fixed sequence of arriving limit order prices, we can choose to express these prices either as their original (absolute) values, or we can run the order book dynamical process and transform each order into a relative difference with the top of its book, and obtain identical results. The differences arise when we consider the stability question introduced above. Intuitively, in the absolute model a small perturbation in the arriving limit price sequence should have limited (but still some) effects on the subsequent evolution of the order books, since prices are determined independently. For the relative model this intuition is less clear.
It seems possible that a small perturbation could (for example) slightly modify the current bid, which in turn could slightly modify the price of the next arriving order, which could then slightly modify the price of the subsequent order, and so on, leading to an amplifying sequence of events. Our main results demonstrate that these two models do indeed have dramatically different stability properties. We first show that for any fixed sequence of prices in the absolute model, the modification of a single order has a bounded and extremely limited impact on the subsequent evolution of the books. In particular, we define a natural notion of distance between order books and show that small modifications can result in only constant distance to the original books for all subsequent time steps. We then show that this implies that for almost any standard statistic of market activity - the executed volume, the average execution price, and many others - the statistic can be influenced only infinitesimally by small perturbations. In contrast, we show that the relative model enjoys no such stability properties. After giving specific (worst-case) relative price sequences in which small perturbations generate large changes in basic statistics (for example, altering the number of shares traded by a factor of two), we proceed to demonstrate that the difference in stability properties of the two models is more than merely theoretical. Using extensive INET (a major ECN for NASDAQ stocks) limit order data and order book reconstruction code, we investigate the empirical stability properties when the data is interpreted as containing either absolute prices, relative prices, or mixtures of the two. The theoretical predictions of stability and instability are strongly borne out by the subsequent experiments. In addition to stability being of fundamental interest in any important dynamical system, we believe that the results described here provide food for thought on the topics of market impact and the backtesting of quantitative trading strategies (the attempt to determine hypothetical past performance using historical data). They suggest that one's confidence that trading quietly and in small volumes will have minimal market impact is linked to an implicit belief in an absolute price model. Our results, and the fact that in the real markets there is a large and increasing amount of relative behavior such as penny-jumping, would seem to cast doubts on such beliefs. Similarly, in a purely or largely relative-price world, backtesting even low-frequency, low-volume strategies could result in historical estimates of performance that are not only unrelated to future performance (the usual concern), but are not even accurate measures of a hypothetical past. The outline of the paper follows. In Section 2 we briefly review the large literature on market microstructure. In Section 3 we describe the limit order mechanism and our formal models. Section 4 presents our most important theoretical result, the 1-Modification Theorem for the absolute price model. This theorem is applied in Section 5 to derive a number of strong stability properties in the absolute model. Section 6 presents specific examples establishing the worst-case instability of the relative model. Section 7 contains the simulation studies that largely confirm our theoretical findings on INET market data.
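As a toy illustration (not an example from the paper) of the amplification channel described above, consider a stream of relative buy orders, each pricing itself at the current bid plus an offset; nudging a single early order shifts every subsequent price:

```python
# Each arriving buy order becomes the new bid at (current bid + offset),
# so one perturbed offset propagates through all later prices.
def replay_bids(offsets, initial_bid=100):
    bid, prices = initial_bid, []
    for delta in offsets:
        bid = bid + delta
        prices.append(bid)
    return prices

offsets = [1, 1, 1, 1, 1]
print(replay_bids(offsets))        # [101, 102, 103, 104, 105]
perturbed = [2] + offsets[1:]      # perturb only the first order
print(replay_bids(perturbed))      # [102, 103, 104, 105, 106]
```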
2. RELATED WORK As was mentioned in the Introduction, market microstructure is an important and timely topic both in academic finance and on Wall Street, and consequently has a large and varied recent literature. Here we have space only to summarize the main themes of this literature and to provide pointers to further readings. To our knowledge the stability properties of detailed limit order microstructure dynamics have not been previously considered. (However, see Farmer and Joshi [6] for an example and survey of other price dynamics stability studies.) On the more theoretical side, there is a rich line of work examining what might be considered the game-theoretic properties of limit order markets. These works model traders and market-makers (who provide liquidity by offering both buy and sell quotes, and profit on the difference) by utility functions incorporating tolerance for risks of price movement, large positions and other factors, and examine the resulting equilibrium prices and behaviors. Common findings predict negative price impacts for large trades, and price effects for large inventory holdings by market-makers. An excellent and comprehensive survey of results in this area can be found in [2]. There is a similarly large body of empirical work on microstructure. Major themes include the measurement of price impacts, statistical properties of limit order books, and attempts to establish the informational value of order books [4]. A good overview of the empirical work can be found in [7]. Of particular note for our interests is [3], which empirically studies the distribution of arriving limit order prices in several prominent markets. This work takes a view of arriving prices analogous to our relative model, and establishes a power-law form for the resulting distributions. There is also a small but growing number of works examining market microstructure topics from a computer science perspective, including some focused on the use of microstructure in algorithms for optimized trade execution. Kakade et al. [9] introduced limit order dynamics in competitive analysis for one-way and volume-weighted average price (VWAP) trading. Some recent papers have applied reinforcement learning methods to trade execution using order book properties as state variables [1, 5, 10]. 3. MICROSTRUCTURE PRELIMINARIES The following expository background material is adapted from [9]. The market mechanism we examine in this paper is driven by the simple and standard concept of a limit order. Suppose we wish to purchase 1000 shares of Microsoft (MSFT) stock. In a limit order, we specify not only the desired volume (1000 shares), but also the desired price. Suppose that MSFT is currently trading at roughly $24.07 a share (see Figure 1, which shows an actual snapshot of an MSFT order book on INET), but we are only willing to buy the 1000 shares at $24.04 a share or lower. We can choose to submit a limit order with this specification, and our order will be placed in a queue called the buy order book, which is ordered by price, with the highest offered unexecuted buy price at the top (often referred to as the bid). If there are multiple limit orders at the same price, they are ordered by time of arrival (with older orders higher in the book). In the example provided by Figure 1, our order would be placed immediately after the extant order for 5,503 shares at $24.04; though we offer the same price, this order has arrived before ours.
Similarly, a sell order book for sell limit orders is maintained, this time with the lowest sell price offered (often referred to as the ask) at its top. Thus, the order books are sorted from the most competitive limit orders at the top (high buy prices and low sell prices) down to less competitive limit orders. The bid and ask prices together are sometimes referred to as the inside market, and the difference between them as the spread. By definition, the order books always consist exclusively of unexecuted orders - they are queues of orders hopefully waiting for the price to move in their direction.

Figure 1: Sample INET order books for MSFT.

How then do orders get (partially) executed? If a buy (sell, respectively) limit order comes in above the ask (below the bid, respectively) price, then the order is matched with orders on the opposing books until either the incoming order's volume is filled, or no further matching is possible, in which case the remaining incoming volume is placed in the books. For instance, suppose in the example of Figure 1 a buy order for 2000 shares arrived with a limit price of $24.08. This order would be partially filled by the two 500-share sell orders at $24.069 in the sell books, the 500-share sell order at $24.07, and the 200-share sell order at $24.08, for a total of 1700 shares executed. The remaining 300 shares of the incoming buy order would become the new bid of the buy book at $24.08. It is important to note that the prices of executions are the prices specified in the limit orders already in the books, not the prices of the incoming order that is immediately executed. Thus in this example, the 1700 executed shares would be at different prices. Note that this also means that in a pure limit order exchange such as INET, market orders can be simulated by limit orders with extreme price values. In exchanges such as INET, any order can be withdrawn or canceled by the party that placed it any time prior to execution. Every limit order arrives atomically and instantaneously - there is a strict temporal sequence in which orders arrive, and two orders can never arrive simultaneously. This gives rise to the definition of the last price of the exchange, which is simply the last price at which the exchange executed an order. It is this quantity that is usually meant when people casually refer to the (ticker) price of a stock. 3.1 Formal Definitions We now provide a formal model for the limit order process described above. In this model, limit orders arrive in a temporal sequence, with each order specifying its limit price and an indication of its type (buy or sell). Like the actual exchanges, we also allow cancellation of a standing (unexecuted) order in the books any time prior to its execution. Without loss of generality we limit attention to a model in which every order is for a single share; large order volumes can be represented by 1-share sequences. Definition 3.1. Let Σ = σ1, ..., σn be a sequence of limit orders, where each σi has the form (ni, ti, vi). Here ni is an order identifier, ti is the order type (buy, sell, or cancel), and vi is the limit order value. In the case that ti is a cancel, ni matches a previously placed order and vi is ignored. We have deliberately called vi in the definition above the limit order value rather than price, since our two models will differ in their interpretation of vi (as being absolute or relative). In the absolute model, we do indeed interpret vi as simply being the price of the limit order.
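A compact sketch of the mechanism of Definition 3.1 (one share per order, price-time priority, execution at the standing order's price, cancellations) follows; the relative interpretation defined next is included via a flag, under the assumption that a bid or ask exists whenever a relative offset arrives. Details such as tick sizes are omitted.

```python
import heapq

class Book:
    """One-share limit order book: price-time priority with cancellations."""
    def __init__(self, relative=False):
        self.asks, self.bids = [], []    # heap entries: (key, seq, oid, price)
        self.dead, self.seq, self.relative = set(), 0, relative
        self.executions = []             # (price, standing oid, incoming oid)

    def _best(self, side):               # best live order, lazily pruning
        while side and side[0][2] in self.dead:
            heapq.heappop(side)
        return side[0] if side else None

    def bid(self):
        b = self._best(self.bids); return b[3] if b else None

    def ask(self):
        a = self._best(self.asks); return a[3] if a else None

    def process(self, oid, typ, v):
        if typ == "cancel":
            self.dead.add(oid); return
        self.seq += 1
        if typ == "buy":
            price = (self.bid() + v) if self.relative else v
            top = self._best(self.asks)
            if top and top[3] <= price:  # executable: trade at standing ask
                heapq.heappop(self.asks)
                self.executions.append((top[3], top[2], oid))
            else:
                heapq.heappush(self.bids, (-price, self.seq, oid, price))
        else:                            # sell: symmetric
            price = (self.ask() + v) if self.relative else v
            top = self._best(self.bids)
            if top and top[3] >= price:  # executable: trade at standing bid
                heapq.heappop(self.bids)
                self.executions.append((top[3], top[2], oid))
            else:
                heapq.heappush(self.asks, (price, self.seq, oid, price))

b = Book()
b.process(1, "sell", 24.07)   # rests in the sell book; ask = 24.07
b.process(2, "buy", 24.04)    # below the ask; rests, bid = 24.04
b.process(3, "buy", 24.10)    # crosses; executes at the standing price
print(b.executions)           # [(24.07, 1, 3)]
```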
In the relative model, if the current order book configuration is (A, B) (where A is the sell book and B the buy book), the price of the order is ask(A) + vi if ti is sell, and bid(B) + vi if ti is buy, where by ask(X) and bid(X) we denote the price of the order at the top of the book X. (Note that vi can be negative.) Our main interest in this paper is the effects that the modification of a small number of limit orders can have on the resulting dynamics. For simplicity we consider only modifications to the limit order values, but our results generalize to any modification. Definition 3.2. A k-modification of Σ is a sequence Σ' such that for exactly k indices i1, ..., ik we have v'ij ≠ vij, t'ij = tij, and n'ij = nij; for every index ℓ not equal to any ij, j ∈ {1, ..., k}, we have σ'ℓ = σℓ. We now define the various quantities whose stability properties we examine in the absolute and relative models. All of these are standard quantities of common interest in financial markets.
• volume(Σ): Number of shares executed (traded) in the sequence Σ.
• average(Σ): Average execution price.
• close(Σ): Price of the last (closing) execution.
• lastbid(Σ): Bid at the end of the sequence.
• lastask(Σ): Ask at the end of the sequence.
4. THE 1-MODIFICATION THEOREM In this section we provide our most important technical result. It shows that in the absolute model, the effects that the modification of a single order has on the resulting evolution of the order books are extremely limited. We then apply this result to derive strong stability results for all of the aforementioned quantities in the absolute model. Throughout this section, we consider an arbitrary order sequence Σ in the absolute model, and any 1-modification Σ' of Σ. At any point (index) i in the two sequences we shall use (A1, B1) to denote the sell and buy books (respectively) in Σ, and (A2, B2) to denote the sell and buy books in Σ'; for notational convenience we omit explicitly superscripting by the current index i. We will shortly establish that at all times i, (A1, B1) and (A2, B2) are very close. Although the order books are sorted by price, we will use (for example) A1 ∪ {a2} = A2 to indicate that A2 contains an order at some price a2 that is not present in A1, but that otherwise A1 and A2 are identical; thus deleting the order at a2 in A2 would render the books the same. Similarly, B1 ∪ {b2} = B2 ∪ {b1} means B1 contains an order at price b1 not present in B2, B2 contains an order at price b2 not present in B1, and that otherwise B1 and B2 are identical. Using this notation, we now define a set of stable system states, where each state is composed from the order books of the original and the modified sequences. Shortly we show that if we change only one order's value (price), we remain in this set for any sequence of limit orders. Definition 4.1. Let ab be the set of all states (A1, B1) and (A2, B2) such that A1 = A2 and B1 = B2. Let ¯ab be the set of states such that A1 ∪ {a2} = A2 ∪ {a1}, where a1 ≠ a2, and B1 = B2. Let a¯b be the set of states such that B1 ∪ {b2} = B2 ∪ {b1}, where b1 ≠ b2, and A1 = A2. Let ¯a¯b be the set of states in which A1 = A2 ∪ {a1} and B1 = B2 ∪ {b1}, or in which A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}. Finally we define S = ab ∪ ¯ab ∪ a¯b ∪ ¯a¯b as the set of stable states. Theorem 4.1. (1-Modification Theorem) Consider any sequence of orders Σ and any 1-modification Σ' of Σ. Then the order books (A1, B1) and (A2, B2) determined by Σ and Σ' lie in the set S of stable states at all times.
Figure 2: Diagram representing the set S of stable states and the possible transitions within it after the change.

The idea of the proof of this theorem is contained in Figure 2, which shows a state transition diagram labeled by the categories of stable states. This diagram describes all transitions that can take place after the arrival of the order on which Σ and Σ' differ. The following establishes that immediately after the arrival of this differing order, the state lies in S. Lemma 4.2. If at any time the current books (A1, B1) and (A2, B2) are in the set ab (and thus identical), then modifying the price of the next order keeps the state in S. Proof. Suppose the arriving order is a sell order and we change it from a1 to a2; assume without loss of generality that a1 > a2. If neither order is executed immediately, then we move to state ¯ab; if both of them are executed then we stay in state ab; and if only a2 is executed then we move to state ¯a¯b. The analysis of an arriving buy order is similar. Following the arrival of their only differing order, Σ and Σ' are identical. We now give a sequence of lemmas showing that following the initial difference covered by Lemma 4.2, the state remains in S forever on the remaining (identical) sequence.

Figure 3: The state diagram when starting at state ¯ab. This diagram provides the intuition of Lemma 4.3.

We first show that from state ¯ab we remain in S regardless of the next order. The intuition for this lemma is demonstrated in Figure 3. Lemma 4.3. If the current state is in the set ¯ab, then for any order the state will remain in S. Proof. We first provide the analysis for the case of an arriving sell order. Note that in ¯ab the buy books are identical (B1 = B2). Thus either the arriving sell order is executed with the same buy order in both buy books, or it is not executed in both buy books. For the first case, the buy books remain identical (the bid is executed in both) and the sell books remain unchanged. For the second case, the buy books remain unchanged and identical, and the sell books have the new sell order added to both of them (and thus still differ by one order). Next we provide an analysis of the more subtle case where the arriving item is a buy order. For this case we need to take care of several different scenarios. The first is when the top of both sell books (the ask) is identical. Then regardless of whether the new buy order is executed or not, the state remains in ¯ab (the analysis is similar to an arriving sell order). We are left to deal with the case where ask(A1) and ask(A2) are different. Here we discuss two subcases: (a) ask(A1) = a1 and ask(A2) = a2, and (b) ask(A1) = a1 and ask(A2) = a'. Here a1 and a2 are as in the definition of ¯ab in Definition 4.1, and a' is some other price. For subcase (a), by our assumption a1 < a2, either (1) both asks get executed, the sell books become identical, and we move to state ab; (2) neither ask is executed and we remain in state ¯ab; or (3) only ask(A1) = a1 is executed, in which case we move to state ¯a¯b with A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}, where b2 is the arriving buy order price.
For subcase (b), either (1) the buy order is executed in neither sell book and we remain in state ¯ab; or (2) the buy order is executed in both sell books and we stay in state ¯ab, now with A1 ∪ {a2} = A2 ∪ {a'}; or (3) only ask(A1) = a1 is executed and we move to state ¯a¯b. Lemma 4.4. If the current state is in the set a¯b, then for any order the state will remain in S. Lemma 4.5. If the current configuration is in the set ¯a¯b, then for any order the state will remain in S. The proofs of these two lemmas are omitted, but are similar in spirit to that of Lemma 4.3. The next and final lemma deals with cancellations. Lemma 4.6. If the current order book state lies in S, then following the arrival of a cancellation it remains in S. Proof. When a cancellation order arrives, one of the following possibilities holds: (1) the order is still in both sets of books, (2) it is not in either of them, or (3) it is only in one of them. For the first two cases it is easy to see that the cancellation effect is identical on both sets of books, and thus the state remains unchanged. For the case when the order appears only in one set of books, without loss of generality we assume that the cancellation cancels a buy order at b1. Rather than removing b1 from the book we can change it to have price 0, meaning this buy order will never be executed and is effectively canceled. Now regardless of the state that we were in, b1 is still only in one buy book (but with a different price), and thus we remain in the same state in S. The proof of Theorem 4.1 follows from the above lemmas. 5. ABSOLUTE MODEL STABILITY In this section we apply the 1-Modification Theorem to show strong stability properties for the absolute model. We begin with an examination of the executed volume. Lemma 5.1. Let Σ be any sequence and Σ' be any 1-modification of Σ. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2. Proof. By Theorem 4.1 we know that at each stage the books differ by at most two orders. Now since the union of the IDs of the executed orders and the order books is always identical for both sequences, this implies that the executed orders can differ by at most two. Corollary 5.2. Let Σ be any sequence and Σ' be any k-modification of Σ. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2k. An order sequence Σ' is a k-extension of Σ if Σ can be obtained by deleting any k orders in Σ'. Lemma 5.3. Let Σ be any sequence and let Σ' be any k-extension of Σ. Then the sets of the executed orders generated by Σ and Σ' differ by at most 2k. This lemma is the key to obtaining our main absolute model volume result below. We use edit(Σ, Σ') to denote the standard edit distance between the sequences Σ and Σ' - the minimal number of substitutions, insertions and deletions of orders needed to change Σ to Σ'. Theorem 5.4. Let Σ and Σ' be any absolute model order sequences. Then if edit(Σ, Σ') ≤ k, the sets of the executed orders generated by Σ and Σ' differ by at most 4k. In particular, |volume(Σ) − volume(Σ')| ≤ 4k. Proof. We first define the sequence Σ̃, which is the intersection of Σ and Σ'. Since Σ and Σ' are at most k apart, we have that by k insertions we can change Σ̃ to either Σ or Σ', and by Lemma 5.3 its set of executed orders is at most 2k from each. Thus the sets of executed orders in Σ and Σ' are at most 4k apart.
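As an informal empirical check of these bounds, the following sketch (reusing the Book class from the sketch in Section 3, an assumption of this illustration) perturbs a single order's price in a random absolute-model sequence and compares the sets of executed order IDs:

```python
import random

def executed_ids(orders):
    """Replay an absolute-model order sequence; return IDs of all executed
    orders (both the standing and the incoming side of each trade)."""
    book = Book()
    for oid, typ, v in orders:
        book.process(oid, typ, v)
    return {e[1] for e in book.executions} | {e[2] for e in book.executions}

random.seed(0)
orders = [(i, random.choice(["buy", "sell"]), random.randint(90, 110))
          for i in range(1000)]
perturbed = list(orders)
oid, typ, v = perturbed[500]
perturbed[500] = (oid, typ, v + 1)    # a 1-modification: same id and type

diff = executed_ids(orders) ^ executed_ids(perturbed)
assert len(diff) <= 4                 # within the Theorem 5.4 bound (k = 1)
print(len(diff))
```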
We now turn to the quantities that involve execution prices as opposed to volume alone: namely, average(Σ), close(Σ), lastbid(Σ) and lastask(Σ). For these results, unlike executed volume, a condition must hold on Σ in order for stability to occur. This condition is expressed in terms of a natural measure of the spread of the market, or the gap between the buyers and sellers. We motivate this condition by first showing that without it, by changing one order, we can change average(Σ) by any positive value x.

Lemma 5.5. There exists Σ such that for any x ≥ 0, there is a 1-modification Σ′ of Σ such that average(Σ′) = average(Σ) + x.

Proof. Let Σ be a sequence of alternating sell and buy orders in which each seller offers p and each buyer p + x, and the first order is a sell. Then all executions take place at the ask, which is always p, and thus average(Σ) = p. Now suppose we modify only the first sell order to be at price p + 1 + x. This initial sell order will never be executed, and now all executions take place at the bid, which is always p + x.

Similar instability results can be shown to hold for the other price-based quantities. This motivates the introduction of a quantity we call the second spread of the order books, which is defined as the difference between the prices of the second order in the sell book and the second order in the buy book (as opposed to the bid-ask difference, which is commonly called the spread). We note that in a liquid stock, such as those we examine experimentally in Section 7, the second spread will typically be quite small and in fact almost always equal to the spread. In this subsection we consider changes in the sequence only after an initialization period, and sequences such that the second spread is always defined after the time we make a change. We define s2(Σ) to be the maximum second spread in the sequence Σ following the change.

Theorem 5.6. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then
1. |lastbid(Σ) − lastbid(Σ′)| ≤ s2(Σ)
2. |lastask(Σ) − lastask(Σ′)| ≤ s2(Σ)
where s2(Σ) is the maximum over the second spread in Σ following the 1-modification.

Proof. We provide the proof for the last bid; the proof for the last ask is similar. The proof relies on Theorem 4.1 and considers states in the stable set S. For states ab and ¯ab, we have that the bid is identical. Let bid(X), sb(X), and ask(X) be the bid, the second highest buy order, and the ask of a sequence X. Now recall that in state a¯b we have that the sell books are identical, and that the two buy books are identical except for one different order. Thus bid(Σ) + s2(Σ) ≥ sb(Σ) + s2(Σ) ≥ ask(Σ) = ask(Σ′) ≥ bid(Σ′). Now it remains to bound bid(Σ). Here we use the fact that the bid of the modified sequence is at least the second highest buy order in the original sequence, due to the fact that the books differ in only one order. Since bid(Σ′) ≥ sb(Σ) ≥ ask(Σ) − s2(Σ) ≥ bid(Σ) − s2(Σ), we have that |bid(Σ) − bid(Σ′)| ≤ s2(Σ) as desired.

In state ¯a¯b we have that for one sequence the books contain an additional buy order and an additional sell order. First suppose that the books containing the additional orders are those of the original sequence Σ. Now if the bid is not the additional order we are done; otherwise we have the following: bid(Σ) ≤ ask(Σ) ≤ sb(Σ) + s2(Σ) ≤ bid(Σ′) + s2(Σ), where sb(Σ) ≤ bid(Σ′) since the original buy book has only one additional order. Now assume that the books with the additional orders are those of the modified sequence Σ′. We have bid(Σ) + s2(Σ) ≥ ask(Σ) ≥ ask(Σ′) ≥ bid(Σ′), where we used the fact that ask(Σ) ≥ ask(Σ′) since the modified sequence has an additional order. Similarly we have that bid(Σ) ≤ bid(Σ′) since the modified buy book contains an additional order.

We note that the proof of Theorem 5.6 actually establishes that the bid and ask of the original and modified sequences are within s2(Σ) at all times.
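All of the bounds in this subsection are driven by s2, which is easy to read off a book snapshot. A minimal sketch, assuming the illustrative OrderBook representation from Section 4 (the function name is ours); the quantity s2(Σ) is then the maximum of this value over the arrivals following the change.

    def second_spread(book):
        # s2: the second-lowest sell price minus the second-highest buy price.
        sells = sorted(o[0] for o in book.sells)
        buys = sorted((o[0] for o in book.buys), reverse=True)
        if len(sells) < 2 or len(buys) < 2:
            return None   # undefined until both books have depth at least two
        return sells[1] - buys[1]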
Next we provide a technical lemma which relates the (first) spread of the modified sequence to the second spread of the original sequence.

Lemma 5.7. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then the spread of Σ′ is bounded by s2(Σ).

Proof. By the 1-Modification Theorem, we know that the books of the modified sequence and the original sequence can differ by at most one order in each book (buy and sell). Therefore, the second-highest buy order in the original sequence is always at most the bid in the modified sequence, and the second-lowest sell order in the original sequence is always at least the ask of the modified sequence.

We are now ready to state a stability result for the average execution price in the absolute model. It establishes that in highly liquid markets, where the executed volume is large and the spread small, the average price is highly stable.

Theorem 5.8. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then
|average(Σ) − average(Σ′)| ≤ 2(pmax + s2(Σ))/volume(Σ) + s2(Σ)
where pmax is the highest execution price in Σ.

Proof. The proof will show that every execution in Σ besides the execution of the modified order and the last execution has a matching execution in Σ′ with a price different by at most s2(Σ), and will use the fact that pmax + s2(Σ) is a bound on the price in Σ′. Referring to the proof of the 1-Modification Theorem, suppose we are in state ¯a¯b, where we have in one sequence (which can be either Σ or Σ′) an additional buy order b and an additional sell order a. Without loss of generality we assume that the sequence with the additional orders is Σ. If the next execution does not involve a or b then clearly we have the same execution in both Σ and Σ′. Suppose that it involves a; there are two possibilities. Either a is the modified order, in which case we change the average price difference by (pmax + s2(Σ))/volume(Σ), and this can happen only once; or a was executed before in Σ′ and the executions both involve an order whose limit price is a. By Lemma 5.7 the spread of both sequences is bounded by s2(Σ), which implies that the price of the execution in Σ′ was at most a + s2(Σ), while the execution in Σ is at price a, and thus the prices differ by at most s2(Σ).

In states ¯ab and a¯b, as long as we have concurrent executions in the two sequences, we know that the prices can differ by at most s2(Σ). If we have an execution only in one sequence, we either match it in state ¯a¯b, or charge it (pmax + s2(Σ))/volume(Σ) if we end at state ¯a¯b. If we end in state ab, ¯ab or a¯b, then every execution in states ¯ab or a¯b was matched to an execution in state ¯a¯b. If we end up in state ¯a¯b, we have the one execution that is not matched and thus we charge it (pmax + s2(Σ))/volume(Σ).

We next give a stability result for the closing price. We first provide a technical lemma regarding the prices of consecutive executions.

Lemma 5.9. Let Σ be any sequence. Then the prices of two consecutive executions in Σ differ by at most s2(Σ).

Proof.
Suppose the first execution is taken at time t; its price is bounded below by the current bid and above by the current ask. Now after this execution the bid is at least the second highest buy order at time t, if the former bid was executed and no higher buy orders arrived, and higher otherwise. Similarly, the ask is at most the second lowest sell order at time t. Therefore, the next execution price is at least the second bid at time t and at most the second ask at time t, which is at most s2(Σ) away from the bid/ask at time t.

Lemma 5.10. Let Σ be any sequence and let Σ′ be a 1-modification of Σ. If volume(Σ) ≥ 2, then |close(Σ) − close(Σ′)| ≤ s2(Σ).

Proof. We first deal with the case where the last execution occurs in both sequences simultaneously. By Theorem 5.6, both the ask and the bid of Σ and Σ′ are at most s2(Σ) apart at every time t. Since the price of the last execution is their asks (bids) at time t, we are done.

Next we deal with the case where the last execution among the two sequences occurs only in Σ. In this case we know that either the previous execution happened simultaneously in both sequences at time t, and thus all three executions are within the second spread of Σ at time t (the first execution in Σ by definition, the execution in Σ′ by identical arguments as in the former case, and the third by Lemma 5.9); or the previous execution happened only in Σ′ at time t, in which case the two executions are within the spread of Σ at time t (the execution of Σ′ by the same arguments as before, and the execution in Σ must be inside its spread at time t). If the last execution happens only in Σ′, we know that the next execution of Σ will be at most s2(Σ) away from its previous execution by Lemma 5.9. Together with the fact that an execution happening in only one sequence implies that the order is inside the spread of the second sequence, as long as one sequence is a 1-modification of the other, the proof is completed.

5.2 Spread Bounds for k-Modifications

As in the case of executed volume, we would like to extend the absolute model stability results for price-based quantities to the case where multiple orders are modified. Here our results are weaker and depend on the k-spread, the distance between the kth highest buy order and the kth lowest sell order, instead of the second spread. (Looking ahead to Section 7, we note that in actual market data for liquid stocks, this quantity is often very small as well.) We use sk(Σ) to denote the k-spread. As before, we assume that the k-spread is always defined after an initialization period. We first state the following generalization of Lemma 5.7.

Lemma 5.11. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. For ℓ ≥ 1, if sℓ+1(Σ) is always defined after the change, then sℓ(Σ′) ≤ sℓ+1(Σ).

The proof is similar to the proof of Lemma 5.7 and omitted. A simple application of this lemma is the following: let Σ′ be any sequence which is an ℓ-modification of Σ. Then we have s2(Σ′) ≤ sℓ+2(Σ). Now using the above lemma and by simple induction we can obtain the following theorem.

Theorem 5.12. Let Σ be a sequence and let Σ′ be any k-modification of Σ. Then
1. |lastbid(Σ) − lastbid(Σ′)| ≤ ∑ℓ=1..k sℓ+1(Σ) ≤ k·sk+1(Σ)
2. |lastask(Σ) − lastask(Σ′)| ≤ ∑ℓ=1..k sℓ+1(Σ) ≤ k·sk+1(Σ)
3. |close(Σ) − close(Σ′)| ≤ ∑ℓ=1..k sℓ+1(Σ) ≤ k·sk+1(Σ)
4. |average(Σ) − average(Σ′)| ≤ ∑ℓ=1..k [2(pmax + sℓ+1(Σ))/volume(Σ) + sℓ+1(Σ)]
where sℓ(Σ) is the maximum over the ℓ-spread in Σ following the first modification.
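For concreteness, the induction is the usual telescoping argument; the following LaTeX sketch writes it out for the last bid (the other quantities are analogous). We view Σ′ as the end of a chain Σ = Σ(0), Σ(1), ..., Σ(k) = Σ′ in which consecutive sequences differ in a single order.

    \begin{align*}
    |\mathrm{lastbid}(\Sigma) - \mathrm{lastbid}(\Sigma')|
      &\le \sum_{\ell=1}^{k} \left| \mathrm{lastbid}(\Sigma^{(\ell-1)}) - \mathrm{lastbid}(\Sigma^{(\ell)}) \right|
          && \text{(triangle inequality)} \\
      &\le \sum_{\ell=1}^{k} s_2\bigl(\Sigma^{(\ell-1)}\bigr)
          && \text{(Theorem 5.6 on each link)} \\
      &\le \sum_{\ell=1}^{k} s_{\ell+1}(\Sigma) \;\le\; k\, s_{k+1}(\Sigma)
          && \text{(Lemma 5.11, applied inductively).}
    \end{align*}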
We note that while these bounds depend on deeper measures of spread for more modifications, we are working in a 1-share order model. Thus in an actual market, where single orders contain hundreds or thousands of shares, the k-spread even for large k might be quite small and close to the standard 1-spread in liquid stocks.

6. RELATIVE MODEL INSTABILITY

In the relative model the underlying assumption is that traders try to exploit their knowledge of the books to strategically place their orders. Thus if a trader wants her buy order to be executed quickly, she may position it above the current bid and be the first in the queue; if the trader is patient and believes that the price trend is going to be downward, she will place orders deeper in the buy book, and so on. While in the previous sections we showed stability results for the absolute model, here we provide simple examples which show instability in the relative model for the executed volume, last bid, last ask, average execution price and the last execution price. In Section 7 we provide many simulations on actual market data that demonstrate that this instability is inherent to the relative model, and not due to artificial constructions. In the relative model we assume that for every sequence the ask and bid are always defined, so the books have a non-empty initial configuration.

We begin by showing that in the relative model, even a single modification can double the number of shares executed.

Theorem 6.1. There is a sequence Σ and a 1-modification Σ′ of Σ such that volume(Σ′) ≥ 2·volume(Σ).

Proof. For concreteness we assume that at the beginning the ask is 10 and the bid is 8. The sequence Σ is composed of n buy orders with ∆ = 0, followed by n sell orders with ∆ = 0, and finally an alternating sequence of buy orders with ∆ = +1 and sell orders with ∆ = −1 of length 2n. Since the books before the alternating sequence contain n + 1 sell orders at 10 and n + 1 buy orders at 8, we have that each buy-sell pair in the alternating part is matched and executed, but none of the initial 2n orders is executed, and thus volume(Σ) = n. Now we change the first buy order to have ∆ = +1. After the first 2n orders there are still no executions; however, the books are different. Now there are n + 1 sell orders at 10, n buy orders at 9 and one buy order at 8. Now each order in the alternating sequence is executed with one of the former orders, and we have volume(Σ′) = 2n.
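This construction is simple enough to replay directly. Below is a self-contained sketch of a 1-share relative-model replay (a buy is priced at bid + ∆, a sell at ask + ∆) that checks the doubling; the function name and the seed books (a single sell at 10 and a single buy at 8) are our illustrative choices.

    def run_relative(deltas):
        # Replay (side, delta) 1-share orders in the relative model; return volume.
        buys, sells = [8], [10]            # initial books: bid 8, ask 10
        volume = 0
        for side, d in deltas:
            if side == 'buy':
                p = max(buys) + d          # price relative to the current bid
                if p >= min(sells):
                    sells.remove(min(sells))   # executed against the standing ask
                    volume += 1
                else:
                    buys.append(p)
            else:
                p = min(sells) + d         # price relative to the current ask
                if p <= max(buys):
                    buys.remove(max(buys))     # executed against the standing bid
                    volume += 1
                else:
                    sells.append(p)
        return volume

    n = 100
    base = [('buy', 0)] * n + [('sell', 0)] * n + [('buy', +1), ('sell', -1)] * n
    mod = [('buy', +1)] + base[1:]         # modify only the first order's offset
    assert run_relative(base) == n and run_relative(mod) == 2 * n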
The next theorem shows that the spread-based stability results of Section 5.1 do not hold in the relative model. Before providing the proof, we give its intuition. At the beginning the sell book contains only two prices, far apart, with two orders at each; now several buy orders arrive which are not executed in the original sequence, while in the modified sequence they are executed and leave the sell book with only the orders at the high price. Then many sell orders followed by many buy orders arrive, such that in the original sequence they are executed only at the low price, while in the modified sequence they are executed at the high price.

Theorem 6.2. For any positive numbers s and x, there is a sequence Σ with s2(Σ) = s and a 1-modification Σ′ of Σ such that
• |close(Σ) − close(Σ′)| ≥ x
• |average(Σ) − average(Σ′)| ≥ x
• |lastbid(Σ) − lastbid(Σ′)| ≥ x
• |lastask(Σ) − lastask(Σ′)| ≥ x

Proof. Without loss of generality let us consider sequences in which all prices are integer-valued, in which case the smallest possible value for the second spread is 1; we provide the proof for the case s2(Σ) = 2, but the s2(Σ) = 1 case is similar. We consider a sequence Σ such that after an initialization period there have been no executions, the buy book has 2 orders at price 10, and the sell book has 2 orders at price 12 and 2 orders at price 12 + y, where y is a positive integer that will be determined by the analysis. The original sequence Σ is a buy order with ∆ = 0, followed by two buy orders with ∆ = +1, then 2y sell orders with ∆ = 0, and then 2y buy orders with ∆ = +1. We first note that s2(Σ) = 2, there are 2y executions, all at price 12, the last bid is 11 and the last ask is 12.

Next we analyze the modified sequence. We change the first buy order from ∆ = 0 to ∆ = +1. Therefore, the next two buy orders with ∆ = +1 are executed, and afterwards we have that the bid is 11 and the ask is 12 + y. Now the 2y sell orders accumulate at 12 + y, and after the next y buy orders the bid is at 12 + y − 1. Therefore, at the end we have that lastbid(Σ′) = 12 + y − 1, lastask(Σ′) = 12 + y, close(Σ′) = 12 + y, and average(Σ′) = (y/(y+2))·(12 + y) + (2/(y+2))·12. Setting y = x + 2, we obtain the theorem for every quantity.

We note that while this proof was based on the fact that there are two consecutive orders in the books which are far apart (by y), we can provide a slightly more complicated example in which all orders are close (at most 2 apart), yet still one change results in large differences.

7. SIMULATION STUDIES

The results presented so far paint a striking contrast between the absolute and relative price models: while the absolute model enjoys provably strong stability over any fixed event sequence, there exist specific sequences demonstrating great instability in the relative model. The worst-case nature of these results raises the question of the extent to which such differences could actually occur in real markets. In this section we provide indirect evidence on this question by presenting simulation results exploiting a rich source of real-market historical limit order sequence data. By interpreting arriving limit order prices as either absolute values, or by transforming them into differences with the current bid and ask (relative model), we can perform small modifications on the sequences and examine how different various outcomes (volume traded, average price, etc.) would be from what actually occurred in the market. These simulations provide an empirical counterpart to the theory we have developed.

We emphasize that all such simulations interpret the actual historical data as falling into either the absolute or relative model, and are meaningful only within the confines of such an interpretation. Nevertheless, we feel they provide valuable empirical insight into the potential (in)stability properties of modern equity limit order markets, and demonstrate that one's belief or hope in stability largely relies on an absolute model interpretation. We also investigate the empirical behavior of mixtures of absolute and relative prices.

7.1 Data

The historical data used in our simulations is commercially available limit order data from INET, the previously mentioned electronic exchange for NASDAQ stocks.
Broadly speaking, this data consists of practically every single event on INET regarding the trading of an individual stock: every arriving limit order (price, volume, and sequence ID number), every execution, and every cancellation of a standing order, all timestamped in milliseconds. It is data sufficient to recreate the precise INET order book in a given stock on a given day and time. We will report stability properties for three stocks: Amazon, Nvidia, and Qualcomm (identified in the sequel by their tickers, AMZN, NVDA and QCOM). These three provide some range of liquidities (with QCOM having the greatest and NVDA the least liquidity on INET) and other trading properties. We note that the qualitative results of our simulations were similar for several other stocks we examined.

7.2 Methodology

For our simulations we employed order-book reconstruction code operating on the underlying raw data. The basic format of each experiment was the following:

1. Run the order book reconstruction code on the original INET data and compute the quantity of interest (volume traded, average price, etc.).
2. Make a small modification to a single order, and recompute the resulting value of the quantity of interest.

In the absolute model case, Step 2 is as simple as modifying the order in the original data and re-running the order book reconstruction. For the relative model, we must first pre-process the raw data and convert its prices to relative values, then make the modification and re-run the order book reconstruction on the relative values; a sketch of this conversion appears below.

The type of modification we examined was extremely small compared to the volume of orders placed in these stocks: namely, the deletion of a single randomly chosen order from the sequence. Although a deletion is not a 1-modification, its edit distance is 1 and we can apply Theorem 5.4. For each trading day examined, this single deleted order was selected among those arriving between 10 AM and 3 PM, and the quantities of interest were measured and compared at 3 PM. These times were chosen to include the busiest part of the trading day but avoid the half hour around the opening and closing of the official NASDAQ market (9:30 AM and 3:30 PM respectively), which are known to have different dynamics than the central portion of the day.

We run the absolute and relative model simulations on both the raw INET data and on a cleaned version of this data. In the cleaned version we remove all limit orders that were canceled in the actual market prior to their execution (along with the cancellations themselves). The reason is that such cancellations may often be the first step in the repositioning of orders, that is, cancellations that are followed by the submission of a replacement order at a different price. Not removing canceled orders allows the possibility of modified simulations in which the "same" order is executed twice ("same" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data), and this may magnify instability effects. Again, it is clear that neither the raw nor the cleaned data can perfectly reflect what would have happened under the deleted orders in the actual market. However, the results from the raw data and the cleaned data are qualitatively similar; they differ mainly, as expected, in the executed volume, where the instability results for the relative model are much more dramatic in the raw data.
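The following sketch illustrates the relative-value pre-processing and the deletion perturbation. It assumes the illustrative OrderBook class from Section 4 and is our own rendering of the procedure, not the production reconstruction code; it also assumes the bid and ask are defined at every conversion step, as in the relative model.

    import random

    def to_relative(events):
        # Re-express each arriving limit price as an offset from the current
        # bid (for buys) or ask (for sells), tracking the book as we go.
        book, rel = OrderBook(), []
        for oid, side, price in events:
            ref = book.bid() if side == 'buy' else book.ask()
            rel.append((oid, side, price - ref))   # assumes bid/ask are defined
            book.process(oid, side, price)
        return rel

    def delete_one(events, rng=random.Random(0)):
        # The perturbation used in the experiments: drop one random order.
        i = rng.randrange(len(events))
        return events[:i] + events[i + 1:]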
7.3 Results

We begin with summary statistics capturing our overall stability findings. Each row of the tables below contains a ticker (e.g. AMZN) followed by either -R (for the uncleaned or raw data) or -C (for the data with canceled orders removed). For each of the approximately 250 trading days in 2003, 1000 trials were run in which a randomly selected order was deleted from the INET event sequence. For each quantity of interest (volume executed, average price, closing price and last bid), we show for both the absolute and relative models the average percentage change in the quantity induced by the deletion.

The results confirm rather strikingly the qualitative conclusions of the theory we have developed. In virtually every case (stock, raw or cleaned data, and quantity) the percentage change induced by a single deletion in the relative model is many orders of magnitude greater than in the absolute model, and shows that indeed butterfly effects may occur in a relative model market. As just one specific representative example, notice that for QCOM on the cleaned data, the relative model effect of just a single deletion on the closing price is in excess of a full percentage point. This is a variety of market impact entirely separate from the more traditional and expected kind generated by trading a large volume of shares.

Stock     Date     volume             average
                   Rel       Abs      Rel      Abs
AMZN-R    2003     15.1%     0.04%    0.3%     0.0002%
AMZN-C    2003     0.69%     0.087%   0.36%    0.0007%
NVDA-R    2003     9.09%     0.05%    0.17%    0.0003%
NVDA-C    2003     0.73%     0.09%    0.35%    0.001%
QCOM-R    2003     16.94%    0.035%   0.21%    0.0002%
QCOM-C    2003     0.58%     0.06%    0.35%    0.0005%

Stock     Date     close              lastbid
                   Rel       Abs      Rel      Abs
AMZN-R    2003     0.78%     0.0001%  0.78%    0.0007%
AMZN-C    2003     1.10%     0.077%   1.11%    0.001%
NVDA-R    2003     1.17%     0.002%   1.18%    0.08%
NVDA-C    2003     0.45%     0.0003%  0.45%    0.0006%
QCOM-R    2003     0.58%     0.0001%  0.58%    0.0004%
QCOM-C    2003     1.05%     0.0006%  1.05%    0.06%

In Figure 4 we examine how the change to one of the quantities, the average execution price, grows with the introduction of greater perturbations of the event sequence in the two models. Rather than deleting only a single order between 10 AM and 3 PM, in these experiments a growing number of randomly chosen deletions was performed, and the percentage change to the average price measured. As suggested by the theory we have developed, for the absolute model the change to the average price grows linearly with the number of deletions and remains very small (note the vastly different scales of the y-axis in the panels for the absolute and relative models in the figure). For the relative model, it is interesting to note that while small numbers of changes have large effects (often causing average execution price changes well in excess of 0.1 percent), the effect of large numbers of changes levels off quite rapidly and consistently.

We conclude with an examination of experiments with a mixture model. Even if one accepts a world in which traders behave in either an absolute or relative manner, one would be likely to claim that the market contains a mixture of both. We thus ran simulations in which each arriving order in the INET event streams was treated as an absolute price with probability α, and as a relative price with probability 1 − α; a sketch of this replay appears below. Representative results for the average execution price in this mixture model are shown in Figure 5 for AMZN and NVDA.
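A minimal sketch of the mixture replay, again assuming the illustrative OrderBook from Section 4: each event carries both encodings of the same order, and an independent coin flip decides which encoding governs its price. The function name and event format are our assumptions for illustration.

    import random

    def replay_mixture(events, alpha, rng=random.Random(0)):
        # events are (oid, side, abs_price, rel_offset): the absolute and
        # relative encodings of the same arriving order.
        book = OrderBook()
        for oid, side, p_abs, d in events:
            if rng.random() < alpha:
                price = p_abs                                    # absolute trader
            else:
                ref = book.bid() if side == 'buy' else book.ask()
                price = ref + d                                  # relative trader
            book.process(oid, side, price)
        return book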
Perhaps as expected, we see a monotonic decrease in the percentage change (instability) as the fraction of absolute traders increases, with most of the reduction already being realized by the introduction of just a small population of absolute traders. Thus even in a largely relative-price world, a small minority of absolute traders can have a greatly stabilizing effect. Similar behavior is found for the closing price and last bid.

Figure 4: Percentage change to the average execution price (y-axis) as a function of the number of deletions to the sequence (x-axis). The left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004. Curves represent averages over 1000 trials.

Figure 5: Percentage change to the average execution price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). Each curve corresponds to a single day of trading during a month of 2004. Curves represent averages over 1000 trials.

For the executed volume in the mixture model, however, the findings are more curious. In Figure 6, we show how the percentage change to the executed volume varies with the absolute trader fraction α, for NVDA data that is both raw and cleaned of cancellations. We first see that for this quantity, unlike the others, the difference induced by the cleaned and uncleaned data is indeed dramatic, as already suggested by the summary statistics tables above. But most intriguing is the fact that stability is not monotonically increasing with α for either the cleaned or uncleaned data: the market with maximum instability is not a pure relative price market, but occurs at some nonzero value of α. It was in fact not obvious to us that sequences with this property could even be artificially constructed, much less that they would occur as actual market data. We have yet to find a satisfying explanation for this phenomenon and leave it to future research.

Figure 6: Percentage change to the executed volume (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). The left panel is for NVDA using the raw data that includes cancellations, while the right panel is on the cleaned data. Each curve corresponds to a single day of trading during June 2004. Curves represent averages over 1000 trials.

8. ACKNOWLEDGMENTS

We are grateful to Yuriy Nevmyvaka of Lehman Brothers in New York for the use of his INET order book reconstruction code, and for valuable comments on the work presented here. Yishay Mansour was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778, by a grant from the Israel Science Foundation, and by an IBM faculty award.

9. REFERENCES

[1] D. Bertsimas and A. Lo. Optimal control of execution costs. Journal of Financial Markets, 1:1-50, 1998.
[2] B. Biais, L. Glosten, and C. Spatt. Market microstructure: a survey of microfoundations, empirical results and policy implications. Journal of Financial Markets, 8:217-264, 2005.
[3] J.-P. Bouchaud, M. Mezard, and M. Potters. Statistical properties of stock order books: empirical results and models. Quantitative Finance, 2:251-256, 2002.
[4] C. Cao, O. Hansch, and X. Wang. The informational content of an open limit order book, 2004. AFA 2005 Philadelphia Meetings, EFA Maastricht Meetings Paper No. 4311.
[5] R. Coggins, A. Blazejewski, and M. Aitken. Optimal trade execution of equities in a limit order market. In International Conference on Computational Intelligence for Financial Engineering, pages 371-378, March 2003.
[6] D. Farmer and S. Joshi. The price dynamics of common trading strategies. Journal of Economic Behavior and Organization, 29:149-171, 2002.
[7] J. Hasbrouck. Empirical market microstructure: Economic and statistical perspectives on the dynamics of trade in securities markets, 2004. Course notes, Stern School of Business, New York University.
[8] R. Kissell and M. Glantz. Optimal Trading Strategies. Amacom, 2003.
[9] S. Kakade, M. Kearns, Y. Mansour, and L. Ortiz. Competitive algorithms for VWAP and limit order trading. In Proceedings of the ACM Conference on Electronic Commerce, pages 189-198, 2004.
[10] Y. Nevmyvaka, Y. Feng, and M. Kearns. Reinforcement learning for optimized trade execution, 2006. Preprint.
(In) Stability Properties of Limit Order Dynamics ABSTRACT We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to "butterfly effects"--the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of "absolute" traders (who determine their prices independent of the current order book state) or "relative" traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks. 1. INTRODUCTION In recent years there has been an explosive increase in the automation of modern equity markets. This increase has taken place both in the exchanges, which are increasingly computerized and offer sophisticated interfaces for order placement and management, and in the trading activity itself, which is ever more frequently undertaken by software. The so-called Electronic Communication Networks (or ECNs) that dominate trading in NASDAQ stocks are a common example of the automation of the exchanges. On the trading side, computer programs now are entrusted not only with the careful execution of large block trades for clients (sometimes referred to on Wall Street as program trading), but with the autonomous selection of stocks, direction (long or short) and volumes to trade for profit (commonly referred to as statistical arbitrage). The vast majority of equity trading is done via the standard limit order market mechanism. In this mechanism, continuous trading takes place via the arrival of limit orders specifying whether the party wishes to buy or sell, the volume desired, and the price offered. Arriving limit orders that are entirely or partially executable with the best offers on the other side are executed immediately, with any volume not immediately executable being placed in an queue (or book) ordered by price on the appropriate side (buy or sell). (A detailed description of the limit order mechanism is given in Section 3.) While traders have always been able to view the prices at the top of the buy and sell books (known as the bid and ask), a relatively recent development in certain exchanges is the real-time revelation of the entire order book--the complete distribution of orders, prices and volumes on both sides of the exchange. With this revelation has come the opportunity--and increasingly, the need--for modeling and exploiting limit order data and dynamics. It is fair to say that market microstructure, as this area is generally known, is a topic commanding great interest both in the real markets and in the academic finance literature. The opportunities and needs span the range from the optimized execution of large trades to the creation of stand-alone "proprietary" strategies that attempt to profit from high-frequency microstructure signals. In this paper we investigate a previously unexplored but fundamental aspect of limit order microstructure: the stability properties of the dynamics. 
Specifically, we are interested in the following natural question: To what extent are simple models of limit order markets either susceptible or immune to "butterfly effects"--that is, the infliction of large changes in important activity statistics (such as the number of shares traded or the average price per share) by only minor perturbations of the order sequence? To examine this question, we consider two stylized but natural models of the limit order arrival process. In the absolute price model, buyers and sellers arrive with limit order prices that are determined independently of the current state of the market (as represented by the order books), though they may depend on all manner of exogenous information or shocks, such as time, news events, announcements from the company whose shares are being traded, private signals or state of the individual traders, etc. . This process models traditional "fundamentals" - based trading, in which market participants each have some inherent but possibly varying valuation for the good that in turn determines their limit price. In contrast, in the relative price model, traders express their limit order prices relative to the best price offered in their respective book (buy or sell). Thus, a buyer would encode their limit order price as an offset A (which may be positive, negative, or zero) from the current bid Pb, which is then translated to the limit price Pb + A. Again, in addition to now depending on the state of the order books, prices may also depend on all manner of exogenous information. The relative price model can be viewed as modeling traders who, in addition to perhaps incorporating fundamental external information on the stock, may also position their orders strategically relative to the other orders on their side of the book. A common example of such strategic behavior is known as "penny-jumping" on Wall Street, in which a trader who has in interest in buying shares quickly, but still at a discount to placing a market order, will deliberately position their order just above the current bid. More generally, the entire area of modern execution optimization [9, 10, 8] has come to rely heavily on the careful positioning of limit orders relative to the current order book state. Note that such positioning may depend on more complex features of the order books than just the current bid and ask, but the relative model is a natural and simplified starting point. We remark that an alternate view of the two models is that all traders behave in a relative manner, but with "absolute" traders able to act only on a considerably slower time scale than the faster "relative" traders. How do these two models differ? Clearly, given any fixed sequence of arriving limit order prices, we can choose to express these prices either as their original (absolute) values, or we can run the order book dynamical process and transform each order into a relative difference with the top of its book, and obtain identical results. The differences arise when we consider the stability question introduced above. Intuitively, in the absolute model a small perturbation in the arriving limit price sequence should have limited (but still some) effects on the subsequent evolution of the order books, since prices are determined independently. For the relative model this intuition is less clear. 
It seems possible that a small perturbation could (for example) slightly modify the current bid, which in turn could slightly modify the price of the next arriving order, which could then slightly modify the price of the subsequent order, and so on, leading to an amplifying sequence of events. Our main results demonstrate that these two models do indeed have dramatically different stability properties. We first show that for any fixed sequence of prices in the absolute model, the modification of a single order has a bounded and extremely limited impact on the subsequent evolution of the books. In particular, we define a natural notion of distance between order books and show that small modifications can result in only constant distance to the original books for all subsequent time steps. We then show that this implies that for almost any standard statistic of market activity--the executed volume, the average price execution price, and many others--the statistic can be influenced only infinitesimally by small perturbations. In contrast, we show that the relative model enjoys no such stability properties. After giving specific (worst-case) relative price sequences in which small perturbations generate large changes in basic statistics (for example, altering the number of shares traded by a factor of two), we proceed to demonstrate that the difference in stability properties of the two models is more than merely theoretical. Using extensive INET (a major ECN for NASDAQ stocks) limit order data and order book reconstruction code, we investigate the empirical stability properties when the data is interpreted as containing either absolute prices, relative prices, or mixtures of the two. The theoretical predictions of stability and instability are strongly borne out by the subsequent experiments. In addition to stability being of fundamental interest in any important dynamical system, we believe that the results described here provide food for thought on the topics of market impact and the "backtesting" of quantitative trading strategies (the attempt to determine hypothetical past performance using historical data). They suggest that one's confidence that trading "quietly" and in small volumes will have minimal market impact is linked to an implicit belief in an absolute price model. Our results and the fact that in the real markets there is a large and increasing amount of relative behavior such as penny-jumping would seem to cast doubts on such beliefs. Similarly, in a purely or largely relative-price world, backtesting even low-frequency, low-volume strategies could result in historical estimates of performance that are not only unrelated to future performance (the usual concern), but are not even accurate measures of a hypothetical past. The outline of the paper follows. In Section 2 we briefly review the large literature on market microstructure. In Section 3 we describe the limit order mechanism and our formal models. Section 4 presents our most important theoretical results, the 1-Modification Theorem for the absolute price model. This theorem is applied in Section 5 to derive a number of strong stability properties in the absolute model. Section 6 presents specific examples establishing the worstcase instability of the relative model. Section 7 contains the simulation studies that largely confirm our theoretical findings on INET market data. 2. 
RELATED WORK As was mentioned in the Introduction, market microstructure is an important and timely topic both in academic finance and on Wall Street, and consequently has a large and varied recent literature. Here we have space only to summarize the main themes of this literature and to provide pointers to further readings. To our knowledge the stability properties of detailed limit order microstructure dynamics have not been previously considered. (However, see Farmer and Joshi [6] for an example and survey of other price dynamic stability studies.) On the more theoretical side, there is a rich line of work examining what might be considered the game-theoretic properties of limit order markets. These works model traders and market-makers (who provide liquidity by offering both buy and sell quotes, and profit on the difference) by utility functions incorporating tolerance for risks of price movement, large positions and other factors, and examine the resulting equilibrium prices and behaviors. Common findings predict negative price impacts for large trades, and price effects for large inventory holdings by market-makers. An excellent and comprehensive survey of results in this area can be found in [2]. There is a similarly large body of empirical work on microstructure. Major themes include the measurement of price impacts, statistical properties of limit order books, and attempts to establish the informational value of order books [4]. A good overview of the empirical work can be found in [7]. Of particular note for our interests is [3], which empirically studies the distribution of arriving limit order prices in several prominent markets. This work takes a view of arriving prices analogous to our relative model, and establishes a power-law form for the resulting distributions. There is also a small but growing number of works examining market microstructure topics from a computer science perspective, including some focused on the use of microstructure in algorithms for optimized trade execution. Kakade et al. [9] introduced limit order dynamics in competitive analysis for one-way and volume-weighted average price (VWAP) trading. Some recent papers have applied reinforcement learning methods to trade execution using order book properties as state variables [1, 5, 10]. 3. MICROSTRUCTURE PRELIMINARIES The following expository background material is adapted from [9]. The market mechanism we examine in this paper is driven by the simple and standard concept of a limit order. Suppose we wish to purchase 1000 shares of Microsoft (MSFT) stock. In a limit order, we specify not only the desired volume (1000 shares), but also the desired price. Suppose that MSFT is currently trading at roughly $24.07 a share (see Figure 1, which shows an actual snapshot of an MSFT order book on INET), but we are only willing to buy the 1000 shares at $24.04 a share or lower. We can choose to submit a limit order with this specification, and our order will be placed in a queue called the buy order book, which is ordered by price, with the highest offered unexecuted buy price at the top (often referred to as the bid). If there are multiple limit orders at the same price, they are ordered by time of arrival (with older orders higher in the book). In the example provided by Figure 1, our order would be placed immediately after the extant order for 5,503 shares at $24.04; though we offer the same price, this order has arrived before ours. 
Similarly, a sell order book for sell limit orders is maintained, this time with the lowest sell price offered (often referred to as the ask) at its top. Thus, the order books are sorted from the most competitive limit orders at the top (high buy prices and low sell prices) down to less competitive limit orders. The bid and ask prices together are sometimes referred to as the inside market, and the difference between them as the spread. By definition, the order books always consist exclusively of unexecuted orders--they are queues of orders hopefully waiting for the price to move in their direction. Figure 1: Sample INET order books for MSFT. How then do orders get (partially) executed? If a buy (sell, respectively) limit order comes in above the ask (below the bid, respectively) price, then the order is matched with orders on the opposing books until either the incoming order's volume is filled, or no further matching is possible, in which case the remaining incoming volume is placed in the books. For instance, suppose in the example of Figure 1 a buy order for 2000 shares arrived with a limit price of $24.08. This order would be partially filled by the two 500-share sell orders at $24.069 in the sell books, the 500-share sell order at $24.07, and the 200-share sell order at $24.08, for a total of 1700 shares executed. The remaining 300 shares of the incoming buy order would become the new bid of the buy book at $24.08. It is important to note that the prices of executions are the prices specified in the limit orders already in the books, not the prices of the incoming order that is immediately executed. Thus in this example, the 1700 executed shares would be at different prices. Note that this also means that in a pure limit order exchange such as INET, market orders can be "simulated" by limit orders with extreme price values. In exchanges such as INET, any order can be withdrawn or canceled by the party that placed it any time prior to execution. Every limit order arrives atomically and instantaneously--there is a strict temporal sequence in which orders arrive, and two orders can never arrive simultaneously. This gives rise to the definition of the last price of the exchange, which is simply the last price at which the exchange executed an order. It is this quantity that is usually meant when people casually refer to the (ticker) price of a stock. 3.1 Formal Definitions We now provide a formal model for the limit order pro cess described above. In this model, limit orders arrive in a temporal sequence, with each order specifying its limit price and an indication of its type (buy or sell). Like the actual exchanges, we also allow cancellation of a standing (unexecuted) order in the books any time prior to its execution. Without loss of generality we limit attention to a model in which every order is for a single share; large order volumes can be represented by 1-share sequences. DEFINITION 3.1. Let E = (v1,...vn) be a sequence of limit orders, where each Qi has the form (ni, ti, vi). Here ni is an order identifier, ti is the order type (buy, sell, or cancel), and vi is the limit order value. In the case that ti is a cancel, ni matches a previously placed order and vi is ignored. We have deliberately called vi in the definition above the limit order value rather than price, since our two models will differ in their interpretation of vi (as being absolute or relative). In the absolute model, we do indeed interpret vi as simply being the price of the limit order. 
In the relative model, if the current order book configuration is (A, B) (where A is the sell and B the buy book), the price of the order is ask (A) + vi if ti is sell, and bid (B) + vi if ti is buy, where by ask (X) and bid (X) we denote the price of the order at the top of the book X. (Note vi can be negative.) Our main interest in this paper is the effects that the modification of a small number of limit orders can have on the resulting dynamics. For simplicity we consider only modifications to the limit order values, but our results generalize to any modification. DEFINITION 3.2. A k-modification of E is a sequence E' such that for exactly k indices i1,..., ik vij = v' ij, tij = t' ij, and nij = n'ij. For every f = ij, j E {1,..., k} v = v'. We now define the various quantities whose stability properties we examine in the absolute and relative models. All of these are standard quantities of common interest in financial markets. • volume (E): Number of shares executed (traded) in the sequence E. • average (E): Average execution price. • close (E): Price of the last (closing) execution. • lastbid (E): Bid at the end of the sequence. • lastask (E): Ask at end of the sequence. 4. THE 1-MODIFICATION THEOREM In this section we provide our most important technical result. It shows that in the absolute model, the effects that the modification of a single order has on the resulting evolution of the order books is extremely limited. We then apply this result to derive strong stability results for all of the aforementioned quantities in the absolute model. Throughout this section, we consider an arbitrary order sequence E in the absolute model, and any 1-modification E' of E. At any point (index) i in the two sequences we shall use (A1, B1) to denote the sell and buy books (respectively) in E, and (A2, B2) to denote the sell and buy books in E'; for notational convenience we omit explicitly superscripting by the current index i. We will shortly establish that at all times i, (A1, B1) and (A2, B2) are very "close". Although the order books are sorted by price, we will use (for example) A1 U {a2} = A2 to indicate that A2 contains an order at some price a2 that is not present in A1, but that otherwise A1 and A2 are identical; thus deleting the order at a2 in A2 would render the books the same. Similarly, B1 U {b2} = B2 U {b1} means B1 contains an order at price b1 not present in B2, B2 contains an order at price b2 not present in B1, and that otherwise B1 and B2 are identical. Using this notation, we now define a set of stable system states, where each state is composed from the order books of the original and the modified sequences. Shortly we show that if we change only one order's value (price), we remain in this set for any sequence of limit orders. DEFINITION 4.1. Let ab be the set of all states (A1, B1) and (A2, B2) such that A1 = A2 and B1 = B2. Let ¯ ab be the set of states such that A1 U {a2} = A2 U {a1}, where a1 = a2, and B1 = B2. Let a ¯ b be the set of states such that B1U {b2} = B2U {b1}, where b1 = b2, and A1 = A2. Let ¯ a ¯ b be the set of states in which A1 = A2U {a1} and B1 = B2U {b1}, or in which A2 = A1 U {a2} and B2 = B1 U {b2}. Finally we define S = ab U ¯ ab U ¯ ba U ¯ a ¯ b as the set of stable states. THEOREM 4.1. (1-Modification Theorem) Consider any sequence of orders E and any 1-modification E' of E. Then the order books (A1, B1) and (A2, B2) determined by E and E' lie in the set S of stable states at all times. 
Figure 2: Diagram representing the set S of stable states and the possible movements transitions in it after the change. The idea of the proof of this theorem is contained in Figure 2, which shows a state transition diagram labeled by the categories of stable states. This diagram describes all transitions that can take place after the arrival of the order on which E and E' differ. The following establishes that immediately after the arrival of this differing order, the state lies in S. LEMMA 4.2. If at any time the current books (A1, B1) and (A2, B2) are in the set ab (and thus identical), then modifying the price of the next order keeps the state in S. PROOF. Suppose the arriving order is a sell order and we change it from a1 to a2; assume without loss of generality that a1> a2. If neither order is executed immediately, then we move to state ¯ ab; if both of them are executed then we stay in state ab; and if only a2 is executed then we move to state ¯ a ¯ b. The analysis of an arriving buy order is similar. Following the arrival of their only differing order, E and E' are identical. We now give a sequence of lemmas showing Executed with two orders (not a1 and a2) Figure 3: The state diagram when starting at state ¯ ab. This diagram provides the intuition of Lemma 4.3 that following the initial difference covered by Lemma 4.2, the state remains in S forever on the remaining (identical) sequence. We first show that from state ¯ ab we remain in S regardless the next order. The intuition of this lemma is demonstrated in Figure 3. LEMMA 4.3. If the current state is in the set ¯ ab, then for any order the state will remain in S. PROOF. We first provide the analysis for the case of an arriving sell order. Note that in ¯ ab the buy books are identical (B1 = B2). Thus either the arriving sell order is executed with the same buy order in both buy books, or it is not executed in both buy books. For the first case, the buy books remain identical (the bid is executed in both) and the sell books remain unchanged. For the second case, the buy books remain unchanged and identical, and the sell books have the new sell order added to both of them (and thus still differ by one order). Next we provide an analysis of the more subtle case where the arriving item is a buy order. For this case we need to take care of several different scenarios. The first is when the top of both sell books (the ask) is identical. Then regardless of whether the new buy order is executed or not, the state remains in ¯ ab (the analysis is similar to an arriving sell order). We are left to deal with case where ask (A1) and ask (A2) are different. Here we discuss two subcases: (a) ask (A1) = a1 and ask (A2) = a2, and (b) ask (A1) = a1 and ask (A2) = a'. Here a1 and a2 are as in the definition of ¯ ab in Definition 4.1, and a' is some other price. For subcase (a), by our assumption a1 <a2, then either (1) both asks get executed, the sell books become identical, and we move to state ab; (2) neither ask is executed and we remain in state ¯ ab; or (3) only ask (A1) = a1 is executed, in which case we move to state ¯ a ¯ b with A2 = A1 U {a2} and B2 = B1 U {b2}, where b2 is the arriving buy order price. For subcase (b), either (1) buy order is executed in neither sell book we remain in state ¯ ab; or (2) the buy order is executed in both sell books and stay in state ¯ ab with A1 U {a'} = A2 U {a2}; or (3) only ask (A1) = a1 is executed and we move to state ¯ a ¯ b. LEMMA 4.4. 
If the current state is in the set a ¯ b, then for any order the state will remain in S. LEMMA 4.5. If the current configuration is in the set ¯ a ¯ b, then for any order the state will remain in S The proofs of these two lemmas are omitted, but are similar in spirit to that of Lemma 4.3. The next and final lemma deals with cancellations. LEMMA 4.6. If the current order book state lies in S, then following the arrival of a cancellation it remains in S. PROOF. When a cancellation order arrives, one of the following possibilities holds: (1) the order is still in both sets of books, (2) it is not in either of them and (3) it is only in one of them. For the first two cases it is easy to see that the cancellation effect is identical on both sets of books, and thus the state remains unchanged. For the case when the order appears only in one set of books, without loss of generality we assume that the cancellation cancels a buy order at b1. Rather than removing b1 from the book we can change it to have price 0, meaning this buy order will never be executed and is effectively canceled. Now regardless the state that we were in, b1 is still only in one buy book (but with a different price), and thus we remain in the same state in S. The proof of Theorem 4.1 follows from the above lemmas. 5. ABSOLUTE MODEL STABILITY In this section we apply the 1-Modification Theorem to show strong stability properties for the absolute model. We begin with an examination of the executed volume. LEMMA 5.1. Let E be any sequence and E' be any 1modification of E. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2. PROOF. By Theorem 4.1 we know that at each stage the books differ by at most two orders. Now since the union of the IDs of the executed orders and the order books is always identical for both sequences, this implies that the executed orders can differ by at most two. COROLLARY 5.2. Let E be any sequence and E' be any kmodification of E. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2k. An order sequence E' is a k-extension of E if E can be obtained by deleting any k orders in V. LEMMA 5.3. Let E be any sequence and let E' be any kextension of E. Then the set of the executed orders generated by E and E' differ by at most 2k. This lemma is the key to obtain our main absolute model volume result below. We use edit (E, E') to denote the standard edit distance between the sequences E and E'--the minimal number of substitutions, insertions and deletions or orders needed to change E to V. THEOREM 5.4. Let E and E' be any absolute model order sequences. Then if edit (E, E') <k, the set of the executed orders generated by E and E' differ by at most 4k. In particular, | volume (E) − volume (E') | <4k. PROOF. We first define the sequence E˜ which is the intersection of E and V. Since E and E' are at most k apart, we have that by k insertions we change E˜ to either E or E', and by Lemma 5.3 its set of executed orders is at most 2k from each. Thus the set of executed orders in E and E' is at most 5.1 Spread Bounds Theorem 5.4 establishes strong stability for executed volume in the absolute model. We now turn to the quantities that involve execution prices as opposed to volume alone--namely, average (E), close (E), lastbid (E) and lastask (E). For these results, unlike executed volume, a condition must hold on E in order for stability to occur. 
This condition is expressed in terms of a natural measure of the spread of the market, or the gap between the buyers and sellers. We motivate this condition by first showing that without it, by changing one order, we can change average (E) by any positive value x. LEMMA 5.5. There exists E such that for any x 0, there is a 1-modification E' of E such that average (E') = average (E) + x. PROOF. Let E be a sequence of alternating sell and buy orders in which each seller offers p and each buyer p + x, and the first order is a sell. Then all executions take place at the ask, which is always p, and thus average (E) = p. Now suppose we modify only the first sell order to be at price p + 1 + x. This initial sell order will never be executed, and now all executions take place at the bid, which is always p + x. Similar instability results can be shown to hold for the other price-based quantities. This motivates the introduction of a quantity we call the second spread of the order books, which is defined as the difference between the prices of the second order in the sell book and the second order in the buy book (as opposed to the bid-ask difference, which is commonly called the spread). We note that in a liquid stock, such as those we examine experimentally in Section 7, the second spread will typically be quite small and in fact almost always equal to the spread. In this subsection we consider changes in the sequence only after an initialization period, and sequences such that the second spread is always defined after the time we make a change. We define s2 (E) to be the maximum second spread in the sequence E following the change. THEOREM 5.6. Let E be a sequence and let E' be any 1modification of E. Then 1. | lastbid (E) − lastbid (E') | s2 (E) 2. | lastask (E) − lastask (E') | s2 (E) where s2 (E) is the maximum over the second spread in E following the 1-modification. PROOF. We provide the proof for the last bid; the proof for the last ask is similar. The proof relies on Theorem 4.1 and considers states in the stable set S. For states ab and ¯ ab, we have that the bid is identical. Let bid (X), sb (X), ask (X), be the bid, the second highest buy order, and the ask of a sequence X. Now recall that in state a ¯ b we have that the sell books are identical, and that the two buy books are identical except one different order. Thus Now it remains to bound bid (E). Here we use the fact that the bid of the modified sequence is at least the second highest buy order in the original sequence, due to the fact that the books are different only in one order. Since In state ¯ a ¯ b we have that for one sequence the books contain an additional buy order and an additional sell order. First suppose that the books containing the additional orders are the original sequence E. Now if the bid is not the additional order we are done, otherwise we have the following: where sb (E) bid (E') since the original buy book has only one additional order. Now assume that the books with the additional orders are for the modified sequence E'. We have where we used the fact that ask (E) ask (E') since the modified sequence has an additional order. Similarly we have that bid (E) bid (E') since the modified buy book contains an additional order. We note that the proof of Theorem 5.6 actually establishes that the bid and ask of the original and modified sequences are within s2 (E) at all times. Next we provide a technical lemma which relates the (first) spread of the modified sequence to the second spread of the original sequence. 
Next we provide a technical lemma which relates the (first) spread of the modified sequence to the second spread of the original sequence.

LEMMA 5.7. Let E be a sequence and let E' be any 1-modification of E. Then the spread of E' is bounded by s2(E).

PROOF. By the 1-Modification Theorem, the books of the modified sequence and the original sequence can differ by at most one order in each book (buy and sell). Therefore, the second-highest buy order in the original sequence is always at most the bid in the modified sequence, and the second-lowest sell order in the original sequence is always at least the ask of the modified sequence.

We are now ready to state a stability result for the average execution price in the absolute model. It establishes that in highly liquid markets, where the executed volume is large and the spread small, the average price is highly stable.

THEOREM 5.8. Let E be a sequence and let E' be any 1-modification of E. Then
|average(E) − average(E')| ≤ 2(pmax + s2(E))/volume(E) + s2(E),
where pmax is the highest execution price in E.

PROOF. We show that every execution in E, besides the execution of the modified order and the last execution, has a matching execution in E' with a price different by at most s2(E), and we use the fact that pmax + s2(E) is a bound on any execution price in E'. Referring to the proof of the 1-Modification Theorem, suppose we are in state āb̄, where one sequence (which can be either E or E') has an additional buy order b and an additional sell order a. Without loss of generality we assume that the sequence with the additional orders is E. If the next execution does not involve a or b then clearly we have the same execution in both E and E'. Suppose that it involves a; there are two possibilities. Either a is the modified order, in which case we change the average price difference by at most (pmax + s2(E))/volume(E), and this can happen only once; or a was executed earlier in E', and both executions involve an order whose limit price is a. By Lemma 5.7 the spread of both sequences is bounded by s2(E), which implies that the price of the execution in E' was at most a + s2(E), while the execution in E is at price a, and thus the prices differ by at most s2(E). In states āb and ab̄, as long as we have concurrent executions in the two sequences, the prices can differ by at most s2(E). If we have an execution in only one sequence, we either match it later in state āb̄, or charge it (pmax + s2(E))/volume(E) if we end in state āb̄. If we end in state ab, āb or ab̄, then every execution in states āb or ab̄ was matched to an execution in state āb̄. If we end in state āb̄, we have the one execution that is not matched, and thus we charge it (pmax + s2(E))/volume(E).

We next give a stability result for the closing price. We first provide a technical lemma regarding the prices of consecutive executions.

LEMMA 5.9. Let E be any sequence. Then the prices of two consecutive executions in E differ by at most s2(E).

PROOF. Suppose the first execution takes place at time t; its price is bounded below by the current bid and above by the current ask. After this execution the bid is at least the second-highest buy order at time t: it equals it if the former bid was executed and no higher buy orders arrived, and is higher otherwise. Similarly, the ask is at most the second-lowest sell order at time t. Therefore, the next execution price is at least the second bid at time t and at most the second ask at time t, each of which is at most s2(E) away from the bid/ask at time t.
LEMMA 5.10. Let E be any sequence and let E' be a 1-modification of E. If volume(E) > 2, then |close(E) − close(E')| ≤ s2(E).

PROOF. We first deal with the case where the last execution occurs in both sequences simultaneously. By Theorem 5.6, the ask and the bid of E and E' are at most s2(E) apart at every time t. Since the price of the last execution is their ask (bid) at that time, we are done. Next we deal with the case where the last execution among the two sequences occurs only in E. Then either the previous execution happened simultaneously in both sequences at some time t, and thus all three executions are within the second spread of E at time t (the execution in E by definition, the execution in E' by the same argument as in the former case, and the third by Lemma 5.9); or the previous execution happened only in E' at time t, in which case the two executions are within the spread of E at time t (the execution of E' by the same argument as before, while the execution in E must lie inside its spread at time t). If the last execution happens only in E', we know by Lemma 5.9 that it is at most s2(E) away from the previous execution. Together with the fact that an execution occurring in only one sequence implies that the executed order lies inside the spread of the other sequence (as long as the sequences differ by a 1-modification), this completes the proof.

5.2 Spread Bounds for k-Modifications

As in the case of executed volume, we would like to extend the absolute model stability results for price-based quantities to the case where multiple orders are modified. Here our results are weaker and depend on the k-spread, the distance between the kth-highest buy order and the kth-lowest sell order, instead of the second spread. (Looking ahead to Section 7, we note that in actual market data for liquid stocks, this quantity is often very small as well.) We use sk(E) to denote the k-spread. As before, we assume that the k-spread is always defined after an initialization period. We first state the following generalization of Lemma 5.7.

LEMMA 5.11. Let E be a sequence and let E' be any 1-modification of E. For ℓ ≥ 1, if sℓ+1(E) is always defined after the change, then sℓ(E') ≤ sℓ+1(E).

The proof is similar to the proof of Lemma 5.7 and is omitted. A simple application of this lemma is the following: let Eℓ be any sequence which is an ℓ-modification of E. Then s2(Eℓ) ≤ sℓ+2(E). Using the above lemma and simple induction we obtain the following theorem.

THEOREM 5.12. Let E be a sequence and let E' be any k-modification of E. Then
1. |lastbid(E) − lastbid(E')| ≤ Σℓ=1..k sℓ+1(E) ≤ k·sk+1(E)
2. |lastask(E) − lastask(E')| ≤ Σℓ=1..k sℓ+1(E) ≤ k·sk+1(E)
3. |close(E) − close(E')| ≤ Σℓ=1..k sℓ+1(E) ≤ k·sk+1(E)
4. |average(E) − average(E')| ≤ Σℓ=1..k (2(pmax + sℓ+1(E))/volume(E) + sℓ+1(E)) ≤ k(2(pmax + sk+1(E))/volume(E) + sk+1(E))
where sℓ(E) is the maximum over the ℓ-spread in E following the first modification. We note that while these bounds depend on deeper measures of spread for more modifications, we are working in a 1-share order model. Thus in an actual market, where single orders contain hundreds or thousands of shares, the k-spread even for large k might be quite small and close to the standard 1-spread in liquid stocks.
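To give a hedged sense of scale, the worked instance below plugs purely hypothetical magnitudes for a liquid stock into these bounds; none of the figures come from the INET data of Section 7.

```latex
% Hypothetical magnitudes, for illustration only:
% p_max = 100, s_2(E) = s_{11}(E) = 0.02, volume(E) = 10^5 shares.
\[
  |\mathrm{average}(E) - \mathrm{average}(E')|
    \;\le\; \frac{2\,(p_{\max} + s_2(E))}{\mathrm{volume}(E)} + s_2(E)
    \;=\; \frac{2\,(100 + 0.02)}{10^{5}} + 0.02
    \;\approx\; 0.022,
\]
\[
  \text{and for } k = 10 \text{ modifications:}\quad
  |\mathrm{lastbid}(E) - \mathrm{lastbid}(E')|
    \;\le\; k\, s_{k+1}(E) \;=\; 10 \times 0.02 \;=\; 0.20 .
\]
```

Under these assumptions a single modification moves the average execution price by only about two cents, and even ten modifications move the last bid by at most twenty cents.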
6. RELATIVE MODEL INSTABILITY

In the relative model the underlying assumption is that traders try to exploit their knowledge of the books to strategically place their orders. Thus if a trader wants her buy order to be executed quickly, she may position it above the current bid to be first in the queue; if the trader is patient and believes the price trend is downward, she will place orders deeper in the buy book; and so on. While in the previous sections we showed stability results for the absolute model, here we provide simple examples which show instability in the relative model for the executed volume, last bid, last ask, average execution price and last execution price. In Section 7 we provide many simulations on actual market data that demonstrate that this instability is inherent to the relative model, and not due to artificial constructions. In the relative model we assume that for every sequence the ask and bid are always defined, so the books have a non-empty initial configuration. We begin by showing that in the relative model, even a single modification can double the number of shares executed.

THEOREM 6.1. There is a sequence E and a 1-modification E' of E such that volume(E') ≥ 2·volume(E).

PROOF. For concreteness we assume that at the beginning the ask is 10 and the bid is 8. The sequence E is composed of n buy orders with Δ = 0, followed by n sell orders with Δ = 0, and finally an alternating sequence of buy orders with Δ = +1 and sell orders with Δ = −1 of length 2n. Since the books before the alternating sequence contain n + 1 sell orders at 10 and n + 1 buy orders at 8, each buy/sell pair in the alternating part is matched and executed, but none of the initial 2n orders is executed, and thus volume(E) = n. Now we change the first buy order to have Δ = +1. After the first 2n orders there are still no executions; however, the books are different: there are now n + 1 sell orders at 10, n buy orders at 9 and one buy order at 8. Each order in the alternating sequence is then executed against one of the former orders, and we have volume(E') = 2n.
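The construction above is easy to replay mechanically. The sketch below (Python; not the authors' code) interprets orders as (side, Δ) pairs in the relative model, starting from assumed initial books of one buy at 8 and one sell at 10 so the bid and ask are always defined; n is an illustrative choice.

```python
# Minimal sketch (not the authors' code) of the relative-price model,
# replaying the Theorem 6.1 construction. Initial books are assumed.

def replay_relative(orders, buys, sells):
    """orders: list of ('buy' | 'sell', delta). Returns number of executions."""
    executed = 0
    for side, delta in orders:
        if side == 'buy':
            price = max(buys) + delta         # offset from the current bid
            if price >= min(sells):           # crosses: execute at the ask
                sells.remove(min(sells))
                executed += 1
            else:
                buys.append(price)
        else:
            price = min(sells) + delta        # offset from the current ask
            if price <= max(buys):            # crosses: execute at the bid
                buys.remove(max(buys))
                executed += 1
            else:
                sells.append(price)
    return executed

n = 50
tail = [('buy', +1) if i % 2 == 0 else ('sell', -1) for i in range(2 * n)]
E  = [('buy', 0)] * n + [('sell', 0)] * n + tail
E1 = [('buy', +1)] + E[1:]                    # change only the first order's offset

print(replay_relative(E,  [8.0], [10.0]))     # n executions
print(replay_relative(E1, [8.0], [10.0]))     # 2n executions
```

A single changed offset doubles the printed execution count, matching the theorem.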
The next theorem shows that the spread-based stability results of Section 5.1 do not also hold in the relative model. Before providing the proof, we give its intuition. At the beginning the sell book contains only two prices, far apart, each holding only two orders. Several buy orders then arrive; in the original sequence they are not executed, while in the modified sequence they are executed and leave the sell book with only the orders at the high price. Then many sell orders followed by many buy orders arrive, such that in the original sequence they are executed only at the low price, while in the modified sequence they are executed at the high price.

THEOREM 6.2. For any positive numbers s and x, there is a sequence E with s2(E) = s and a 1-modification E' of E such that
• |close(E) − close(E')| ≥ x
• |average(E) − average(E')| ≥ x
• |lastbid(E) − lastbid(E')| ≥ x
• |lastask(E) − lastask(E')| ≥ x

PROOF. Without loss of generality we consider sequences in which all prices are integer-valued, in which case the smallest possible value for the second spread is 1; we provide the proof for the case s2(E) = 2, but the s2(E) = 1 case is similar. We consider a sequence E such that after an initialization period there have been no executions, the buy book has 2 orders at price 10, and the sell book has 2 orders at price 12 and 2 orders at price 12 + y, where y is a positive integer that will be determined by the analysis. The original sequence E is a buy order with Δ = 0, followed by two buy orders with Δ = +1, then 2y sell orders with Δ = 0, and then 2y buy orders with Δ = +1. We first note that s2(E) = 2, there are 2y executions, all at price 12, the last bid is 11 and the last ask is 12. Next we analyze the modified sequence, in which we change the first buy order from Δ = 0 to Δ = +1. Now the next two buy orders with Δ = +1 are executed, and afterwards the bid is 11 and the ask is 12 + y. The 2y sell orders then accumulate at 12 + y, and after the next y buy orders the bid is at 12 + y − 1. Therefore, at the end we have lastbid(E') = 12 + y − 1, lastask(E') = close(E') = 12 + y, and average(E') = 12 + y²/(y + 2), whereas lastbid(E) = 11 and lastask(E) = close(E) = average(E) = 12. Setting y = x + 2, we obtain the lemma for every property. We note that while this proof relies on the books containing two consecutive orders that are far apart (by y), we can provide a slightly more complicated example in which all orders are close (at most 2 apart), yet a single change still results in large differences.

7. SIMULATION STUDIES

The results presented so far paint a striking contrast between the absolute and relative price models: while the absolute model enjoys provably strong stability over any fixed event sequence, there exist at least specific sequences demonstrating great instability in the relative model. The worst-case nature of these results raises the question of the extent to which such differences could actually occur in real markets. In this section we provide indirect evidence on this question by presenting simulation results exploiting a rich source of real-market historical limit order sequence data. By interpreting arriving limit order prices either as absolute values, or by transforming them into differences with the current bid and ask (relative model), we can perform small modifications on the sequences and examine how different various outcomes (volume traded, average price, etc.) would be from what actually occurred in the market. These simulations provide an empirical counterpart to the theory we have developed. We emphasize that all such simulations interpret the actual historical data as falling into either the absolute or relative model, and are meaningful only within the confines of such an interpretation. Nevertheless, we feel they provide valuable empirical insight into the potential (in)stability properties of modern equity limit order markets, and demonstrate that one's belief or hope in stability largely relies on an absolute-model interpretation. We also investigate the empirical behavior of mixtures of absolute and relative prices.

7.1 Data

The historical data used in our simulations is commercially available limit order data from INET, the previously mentioned electronic exchange for NASDAQ stocks. Broadly speaking, this data consists of practically every single event on INET regarding the trading of an individual stock--every arriving limit order (price, volume, and sequence ID number), every execution, and every cancellation of a standing order--all timestamped in milliseconds. This data is sufficient to recreate the precise INET order book for a given stock on a given day and time. We report stability properties for three stocks: Amazon, Nvidia, and Qualcomm (identified in the sequel by their tickers: AMZN, NVDA and QCOM). These three provide some range of liquidities (with QCOM having the greatest and NVDA the least liquidity on INET) and other trading properties.
We note that the qualitative results of our simulations were similar for several other stocks we examined.

7.2 Methodology

For our simulations we employed order-book reconstruction code operating on the underlying raw data. The basic format of each experiment was the following:
1. Run the order book reconstruction code on the original INET data and compute the quantity of interest (volume traded, average price, etc.).
2. Make a small modification to a single order, and recompute the resulting value of the quantity of interest.
In the absolute model case, Step 2 is as simple as modifying the order in the original data and re-running the order book reconstruction. For the relative model, we must first pre-process the raw data and convert its prices to relative values, then make the modification and re-run the order book reconstruction on the relative values. The type of modification we examined was extremely small compared to the volume of orders placed in these stocks: namely, the deletion of a single randomly chosen order from the sequence. Although a deletion is not a 1-modification, its edit distance is 1 and we can apply Theorem 5.4. For each trading day examined, this single deleted order was selected among those arriving between 10 AM and 3 PM, and the quantities of interest were measured and compared at 3 PM. These times were chosen to include the busiest part of the trading day but avoid the half hour around the opening and closing of the official NASDAQ market (9:30 AM and 4 PM, respectively), which are known to have different dynamics than the central portion of the day. We ran the absolute and relative model simulations on both the raw INET data and on a "cleaned" version of this data, in which we remove all limit orders that were canceled in the actual market prior to their execution (along with the cancellations themselves). The reason is that such cancellations may often be the first step in the "repositioning" of orders, that is, cancellation of an order followed by the submission of a replacement order at a different price. Not removing canceled orders allows the possibility of modified simulations in which the "same" order is executed twice, which may magnify instability effects. (Here "same" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data.) Again, it is clear that neither the raw nor the cleaned data can perfectly reflect "what would have happened" under the deleted orders in the actual market. However, the results from the raw and cleaned data are qualitatively similar; they differ mainly, as expected, in the executed volume, where the instability results for the relative model are much more dramatic in the raw data.
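For concreteness, a heavily simplified sketch of this experimental loop is given below (Python). It is not the actual reconstruction code: the (side, price) event tuples stand in for the real INET records (which also carry volumes, cancellations and sequence IDs), the book is the 1-share model, and we assume every replay produces at least one execution. The relative-model variant would first convert each price to an offset from its side's best quote, as described above.

```python
# Simplified sketch of the Section 7.2 deletion experiment (not the authors'
# reconstruction code). Events are assumed (side, price) tuples.
import random

def replay(events):
    """1-share price-time book; returns the list of execution prices."""
    buys, sells, prints = [], [], []
    for side, price in events:
        if side == 'buy':
            if sells and price >= min(sells):
                prints.append(min(sells)); sells.remove(min(sells))
            else:
                buys.append(price)
        else:
            if buys and price <= max(buys):
                prints.append(max(buys)); buys.remove(max(buys))
            else:
                sells.append(price)
    return prints

def deletion_impact(events, trials=1000, seed=0):
    """Average percentage change in executed volume and average price when a
    single randomly chosen order is deleted (edit distance 1, cf. Theorem 5.4)."""
    rng = random.Random(seed)
    base = replay(events)
    base_vol, base_avg = len(base), sum(base) / len(base)
    d_vol = d_avg = 0.0
    for _ in range(trials):
        i = rng.randrange(len(events))
        mod = replay(events[:i] + events[i + 1:])   # Step 2: delete and re-run
        d_vol += abs(len(mod) - base_vol) / base_vol
        d_avg += abs(sum(mod) / len(mod) - base_avg) / base_avg
    return 100.0 * d_vol / trials, 100.0 * d_avg / trials
```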
7.3 Results

We begin with summary statistics capturing our overall stability findings. Each row of the tables below contains a ticker (e.g. AMZN) followed by either -R (for the uncleaned or raw data) or -C (for the data with canceled orders removed). For each of the approximately 250 trading days in 2003, 1000 trials were run in which a randomly selected order was deleted from the INET event sequence. For each quantity of interest (volume executed, average price, closing price and last bid), we show for both the absolute and relative models the average percentage change in the quantity induced by the deletion.

The results confirm rather strikingly the qualitative conclusions of the theory we have developed. In virtually every case (stock, raw or cleaned data, and quantity), the percentage change induced by a single deletion in the relative model is many orders of magnitude greater than in the absolute model, showing that "butterfly effects" can indeed occur in a relative-model market. As just one representative example, notice that for QCOM on the cleaned data, the relative-model effect of a single deletion on the closing price is in excess of a full percentage point. This is a variety of market impact entirely separate from the more traditional and expected kind generated by trading a large volume of shares.

In Figure 4 we examine how the change to one of the quantities, the average execution price, grows with the introduction of greater perturbations of the event sequence in the two models. Rather than deleting only a single order between 10 AM and 3 PM, in these experiments a growing number of randomly chosen deletions was performed, and the percentage change to the average price measured. As suggested by the theory we have developed, for the absolute model the change to the average price grows linearly with the number of deletions and remains very small (note the vastly different scales of the y-axis in the panels for the absolute and relative models in the figure). For the relative model, it is interesting to note that while small numbers of changes have large effects (often causing average execution price changes well in excess of 0.1 percent), the effect of large numbers of changes levels off quite rapidly and consistently.

Figure 4: Percentage change to the average execution price (y-axis) as a function of the number of deletions to the sequence (x-axis). The left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004. Curves represent averages over 1000 trials.

We conclude with an examination of experiments with a mixture model. Even if one accepts a world in which traders behave in either an absolute or relative manner, one would likely claim that the market contains a mixture of both. We thus ran simulations in which each arriving order in the INET event streams was treated as an absolute price with probability α, and as a relative price with probability 1 − α. Representative results for the average execution price in this mixture model are shown in Figure 5 for AMZN and NVDA. Perhaps as expected, we see a monotonic decrease in the percentage change (instability) as the fraction of absolute traders increases, with most of the reduction already realized by the introduction of just a small population of absolute traders. Thus even in a largely relative-price world, a small minority of absolute traders can have a greatly stabilizing effect. Similar behavior is found for the closing price and last bid.

Figure 5: Percentage change to the average execution price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). Each curve corresponds to a single day of trading during a month of 2004. Curves represent averages over 1000 trials.
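A hedged sketch of this mixture interpretation follows (Python; not the authors' code). The per-order coin flip with probability α, the assumed (side, price, delta) event encoding carrying both interpretations, and the initial books are all illustrative; as in Section 6, the books are assumed to stay non-empty so the quotes are always defined.

```python
# Sketch of the mixture interpretation (not the authors' code): each arriving
# order is read as an absolute price with probability alpha, and otherwise as
# an offset delta from its side's best quote.
import random

def replay_mixture(events, alpha, seed=0):
    """events: list of (side, price, delta). Returns execution prices."""
    rng = random.Random(seed)
    buys, sells, prints = [8.0], [10.0], []   # assumed non-empty initial books
    for side, price, delta in events:
        if rng.random() >= alpha:             # relative trader this time
            price = (max(buys) + delta) if side == 'buy' else (min(sells) + delta)
        if side == 'buy':
            if sells and price >= min(sells):
                prints.append(min(sells)); sells.remove(min(sells))
            else:
                buys.append(price)
        else:
            if buys and price <= max(buys):
                prints.append(max(buys)); buys.remove(max(buys))
            else:
                sells.append(price)
    return prints
```

Sweeping α across [0, 1] and rerunning the deletion experiment at each value would trace out curves analogous to those in Figures 5 and 6.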
For the executed volume in the mixture model, however, the findings are more curious. In Figure 6 we show how the percentage change to the executed volume varies with the absolute-trader fraction α, for NVDA data both raw and cleaned of cancellations. We first see that for this quantity, unlike the others, the difference induced by the cleaned and uncleaned data is indeed dramatic, as already suggested by the summary statistics above. But most intriguing is the fact that stability is not monotonically increasing in α for either the cleaned or the uncleaned data--the market with maximum instability is not a pure relative-price market, but occurs at some nonzero value of α. It was in fact not obvious to us that sequences with this property could even be artificially constructed, much less that they would occur as actual market data. We have yet to find a satisfying explanation for this phenomenon and leave it to future research.